Empowering Faculty through Assessment

By Katherine P. Simpson

from Inquiry, Volume 14, Number 1, Spring 2009, 41-53

© Copyright 2009 Virginia Community College System



Abstract
This article explains how faculty and administrators at Lord Fairfax Community College have made assessment a vital practice across campus.

Since college teachers have a responsibility and a desire to promote their students’ intellectual development . . . understand the structure of knowledge in their disciplines and have opportunities to observe learning in progress every day, they can contribute greatly to the improvement of their own teaching, and our understanding of student learning, by becoming astute observers and skilled assessors of learning in process. (Angelo and Cross, 1993, p. 117)

Instructors at community colleges around Virginia have been assessing student learning, informally and formally, since the colleges opened. Recently, however, with renewed emphasis on accountability and documentation, we brought a group of faculty and administrators together to study the assessment process and develop procedures that keep student-learning objectives (SLOs) at the forefront of instruction. At Lord Fairfax Community College (LFCC), we hoped to build a culture of assessment and continuous improvement.

As luck would have it, shortly after the LFCC committee began its renewed emphasis on assessment, the Virginia Community College System (VCCS) revised the general education requirements (November 2006) to specify clear SLOs, focus on understanding personal, social, and civic values, and ensure proficiency in skills and competencies essential for all college-educated adults. SLOs identify the measurable knowledge, skills, behaviors, or attitudes that learners gain as the result of engaging in a learning activity or program. Student-learning outcomes (also abbreviated SLOs) refer to the results of assessment tasks. Course assessment measures the learning that takes place in all sections of a course across the entire college and should not be confused with assessment of instructors or employee evaluation.

Program leaders at LFCC used the matrix provided by the VCCS to identify general education goals for each of the courses in their program cluster. This made it possible to document how students taking a variety of courses at the college would encounter general education goals while completing courses within a program of study. Program leads were looking at courses in relation to overall programs. Frequently, content and general education requirements were closely related, validating student learning across disciplines and degrees at the program and course levels. In other cases, specific SLOs accentuated content rather than general education goals. Beginning in spring of 2007, faculty assessed VCCS general education requirements alongside content-related SLOs. When people saw the clear relationship among course activities, program objectives, the institutional mission, state-mandated goals, and accreditation requirements, they were motivated to continue with the assessment initiative.

The Southern Association of Colleges and Schools (SACS) requires colleges to identify a focus area, goal, and action plan to improve the quality of learning. This plan is the roadmap that the institution follows for five years and includes objectives, measurable assessment tasks, and responsible parties. At the same time that our assessment initiative and the VCCS general education goals were developing, LFCC was selecting critical thinking as the focus for its Quality Enhancement Plan (QEP) for SACS. Faculty would build a culture of assessment that relates to the new general education requirements and includes critical thinking as a course-related student-learning objective.

This enhanced understanding of the value of assessment as a basis for continuous improvement added a strong evaluation component to the QEP process. The goal of the QEP – to involve all faculty, all students, all courses, and have a positive effect on everyone involved in the life of the college – was within reach.

Creating a Plan

We began to review the first courses in the three-year cycle of course review in spring of 2007. Initially, administrators thought that program leads would want to begin with just 10 courses, but as program leads and faculty saw the overwhelming need for accountability, they challenged themselves to almost triple the original expectation. Starting small and letting faculty energy drive the momentum led to a final decision to assess 29 pilot courses in the first cycle. The assessment team, made up of program leads and others, would then continue to work with faculty to build confidence and competence in the assessment initiative. This involved collaboration: writing SLOs, implementing assessment tasks, analyzing results, and determining actions to take, based on results, for the purpose of increasing student learning at LFCC.

When the Assessment Committee met in fall of 2006, they identified a primary goal: to develop a clear set of relevant and measurable SLOs at the course level. To accomplish this, the group identified a series of tasks, organized into the phases described below.

Phase 1:  Getting Everyone Involved

To help everyone understand our reason for documenting student learning at the college, we had all faculty attend division meetings and view a presentation introducing assessment vocabulary and design. The presenter emphasized faculty decision-making as an integral part of the assessment process. Rather than have an outsider mandate SLOs, discipline faculty would determine the SLOs for their courses.

The most important information included an introduction to the Course Assessment Guide (CAG) that the assessment committee at LFCC had designed. This template identified an SLO, the assessment task, how instructors would measure the task and the benchmark or expected outcome, results (whether students met the learning benchmark), and actions based on results (a plan for improvement in future offerings of the course). Direct assessment methods were defined as those that would give instructors measurable data to study. Examples include written exams, oral exams, performance assessments, standardized tests, licensure exams, oral presentations, projects, demonstrations, case studies, simulations, portfolios, and juried activities with outside panels. Indirect assessment methods were defined as tasks that would provide additional information that might be used to make changes; examples include questionnaires, interviews, focus groups, employer-satisfaction studies, advisory boards, and job/grad-school placement data.

Faculty were also introduced to the three-year cycle of assessment that LFCC would use to meet VCCS and SACS accreditation requirements. Even if the courses individuals taught were not scheduled for review in the upcoming semester, everyone would learn about assessment strategies and get comfortable with assessment procedures as the groups practiced on the pilot courses. SLOs would be required on every course syllabus at LFCC, whether or not the course was undergoing review during this first assessment cycle. While the presenter was positive and enthusiastic about the process, it was clear that participation in this assessment initiative at LFCC was not optional. Unifying SLOs across course sections, assessing student learning, and completing the final phase (collaborating on results and determining necessary changes to increase and extend student learning) had been set as requirements for all educators at LFCC.

The primary question faculty raised was why grades do not work as a measure of assessment; the presenter offered several responses.

To model writing SLOs in a straightforward and non-threatening manner, the following chart was presented.  It uses levels of understanding from Bloom’s taxonomy, combines them with action verbs, and sets up examples for a variety of disciplines.

  

Table 1: Student Learning Objectives (SLOs)

If I want to measure knowledge outcomes, I might write…

The student will…

·   Describe the basic components of empirical research.

·   Give examples of major themes or styles in music, art, or theatre.

·   Recognize in complex text local, rhetorical, and metaphorical patterns.

If I want to measure comprehension outcomes, I might write…

The student will…

·   Correctly classify a variety of plant specimens.

·   Explain the scientific method of inquiry.

·   Summarize the important intellectual, historical, and cultural traditions in music, art, or theatre from the Renaissance to modern times.

If I want to measure application outcomes, I might write…

The student will…

·   Demonstrate in the laboratory a working knowledge of lab-safety procedures.

·   Apply oral communication principles in making a speech.

·   Compute the area of a room.

·   Use editing symbols and printers' marks.

If I want to measure analysis outcomes, I might write…

The student will…

·   Distinguish between primary and secondary literature.

·   Diagram a sentence.

·   Listen to others and analyze their presentations.

·   Differentiate between historical facts and trivia.

If I want to measure synthesis outcomes, I might write…

The student will…

·   Revise faulty copy for a news story.

·   Formulate a hypothesis to guide a research study.

·   Create a poem, painting, or design for a building.

If I want to measure evaluation outcomes, I might write…

The student will…

·   Compare art forms of two diverse cultures.

·   Critically assess an oral presentation.

·   State traditional and personal criteria for evaluating works of art.

·   Draw conclusions from experimental results.

One particularly effective method of gaining faculty support at this stage was to explain that while the college's 57 full-time faculty were confident about teaching in their content areas, its 263 part-time faculty (a number that grows every year) needed guidance and direction. Shouldn't full-time faculty set the expectations for student learning at LFCC? Courses at LFCC needed common SLOs to ensure that each student would get the best education possible. For example, one instructor's English 111 class should not be completely different from another instructor's English 111 class; all students completing English 111 should exit the course having met the documented SLOs. If a technique was working well in a course, couldn't that instructor share the strategy so that other students would benefit? Professional scholarship that emphasizes dialogue between instructors is not meant to threaten but to enhance collegiality.

Faculty also signed up for a professional-development workshop at which discipline groups would meet and collaborate on course objectives. Adjunct faculty were invited but not required to attend this session. In retrospect, it would be preferable to schedule this type of workshop during faculty in-service or research days, but our college needed to move forward quickly in order to get some pieces in place prior to the beginning of the spring semester. Changing a culture of academic independence to one of academic collaboration takes time; the sooner colleges begin the assessment initiative, the better.

The Assessment Audit

Between the division meeting and the professional-development workshop, faculty completed an online assessment audit. Overall, 83 full-time and adjunct faculty members responded.

Results showed that 79 percent of those who responded said that they share ideas with peers in order to improve student learning. This finding added value to the professional-development workshop as educators gathered to discuss, study, and extend assessment practices. Results also showed that 98 percent of faculty assessed their students' knowledge, 91 percent their skills, 48 percent their attitudes, and 43 percent their behaviors. Faculty reported that 89 percent of them used tests, 81 percent quizzes, 32 percent student portfolios, and 30 percent rubrics in their course grading. Responding to a question asking what other assessment strategies they used, 64 percent gave examples of writing tasks, group and individual presentations, seminar discussions, and more. Many listed student feedback as a qualitative indicator.

Information from the results of the assessment audit helped us to establish benchmark data before faculty embarked on procedures to expand LFCC’s existing assessment practices.

 Professional-Development Workshop

Information in the workshop included results of the assessment audit and methods of writing SLOs. Assessment committee members had developed digital presentations called Educator-2-Educator that showed how to complete CAGs for a variety of disciplines. After the workshop, these were available on the Office of Institutional Research and Effectiveness (OIRE) website, so that participants could reference them in the future.

Faculty were introduced to tasks they would complete during the workshop. Program leads and other members of the assessment team facilitated discipline break-out groups to emphasize the relevance of assessment in the context of the content area and to promote faculty leadership in the area of assessment. Participants separated into groups to work on SLOs for a course in their program. They were encouraged to do the following:

Task #1: Course Content Summary.  Program leads convened their groups of faculty.  They reviewed the pilot course content summary to determine whether the existing summary reported necessary content components.  For each course, groups identified at least two VCCS general education requirements, one of which was to be in the area of critical thinking. All course content summaries had to be revised before the groups could move to SLOs for the course. In some cases, course content summaries did not yet exist, so groups had to write these before moving to the second task. Guided by program leads, faculty collaborated on important decisions that would affect teaching and learning at LFCC.

Task #2: CAG for One SLO. Using the course objectives for the first pilot course (from the course content summary) as a guide, program leads and faculty members wrote one SLO together. Program leads encouraged instructors to think about evidence and artifacts that would show the objective had been met; these artifacts, or evidence in the form of documents, would come from instructors and students. After completing one SLO, the group discussed assessment tasks that would capture measurable elements. They also shared best practices and activities they had used to increase student learning in the past.

Task #3: CAG for Three to Five SLOs. Working under the direction of the program leads, faculty members continued to add three to five SLOs and completed a CAG for each SLO in each course. The main area of controversy concerned the SLO assessment task for each course: Did the task have to be the same for all classes? Would all instructors be required to use the designated test, project, or presentation? Would all courses, even those delivered online or as hybrids, have to follow the same mandate as traditional courses? At this stage, while everyone was learning about the process, faculty needed as much continuity as possible across tasks to make compiling results and using data efficient. Faculty voiced concerns and objections within groups and, in some cases, heated debate took place before they reached compromise. Once SLOs were identified for each of the pilot courses, program leads went back to the course content summary and listed the newly developed SLOs. From this time forward, SLOs would appear on LFCC course content summaries.

Faculty members completed an online survey to evaluate their experiences in the workshop. Results showed that faculty felt much more comfortable and prepared for the upcoming Phase 2 after attending the workshop and discussing assessment issues with colleagues. With positive results from the workshop, the assessment committee hoped that faculty at LFCC would have a similar experience to the one Angelo and Cross (1993) describe: “It appears that once teachers begin to raise questions about their own teaching and to collect data about its impact on learning, there is a self-generated pressure to raise questions and discuss findings with colleagues . . . teachers build networks and establish channels of communication” (p. 382). While this occurs informally on a regular basis, institutions are held accountable when “formal, institutionally recognized groups are engaged in continuing intellectual exploration of research and its application to [learning in courses]” (p. 383). We were well on our way!

 Phase 2: Implementation

A guest speaker familiar with SACS requirements addressed the faculty about the importance of assessment at the Spring 2007 convocation. Faculty resources were in place that included the assessment website with forms and sample presentations, a Blackboard site ready for instructors to store electronic artifacts, and new books in the library. Educators at LFCC were ready to embark on Phase 2 of the assessment cycle: it was time to teach, measure student learning by conducting the assessment task, and collect results.

Table 2: The Process

1. Clarify SLOs.

2. Develop systematic assessment tasks appropriate for the discipline.*

3. Decide how faculty groups will collect and report on results.*

4. Teach the course with SLOs in the syllabus.

5. Conduct assessment tasks at the designated time and collect results.

6. Analyze and share results with discipline faculty.*

7. Determine next steps to take and submit the report.

*Collaboration is important; a one-size-fits-all approach is not. Professional expertise guides decision making.

 

All faculty had added SLOs to their course syllabi; however, only courses undergoing review, the pilot courses, had the same SLOs and assessment tasks on each instructor’s syllabus. A CAG for each of the pilot courses specified the SLOs, the assessment task, and the expected outcome (in measurable terms). Submitted to the assessment coordinator with blanks left in the categories of Results and Actions Taken, these templates would be filled in during the evaluation phase of the process. Again, faculty were assured that course assessment was not instructor assessment; course assessment was defined as assessment of the learning that takes place in all sections of the course for the entire college. Online classes would be assessed the same way that traditional classes were assessed, following the three-year cycle of course assessment.

As had been the case when the committee began the initiative, program leads emphasized that there was not one way to conduct the assessment task but a variety of ways, depending on the discipline and the approach the group chose when they convened during the workshop. In some cases, faculty would conduct pre- and post-test assessments and report results on both. In other cases, faculty submitted questions that, once compiled, became the exit exam for the course. Portfolios worked for other discipline groups, especially when the sample size was manageable. Program leads also shared ideas about using rubrics to evaluate students in more qualitative ways.

 Measurements and Meanings

Each CAG detailed the task and the expected outcome in measurable terms. For example, one read, “95 percent of students will successfully complete 100 percent of the skills on the standardized nursing skills checklists (Prentice Hall Fundamentals of Nursing) and continue in the nursing program; 95 percent of students will successfully complete standardized testing through an independent testing service (ATI Fundamentals of Nursing) achieving a benchmark of 64 percent on a proctored exam.” In another example, “class average of 70 percent or higher” was used. Data would not be the same across disciplines, but educators in all areas felt that they had made appropriate decisions that would give them results to use in future decision-making situations.

Determining a research methodology is a matter of asking and answering questions. Although documenting evidence that students achieved the SLOs was the primary goal, faculty groups also considered broader questions as they determined ways to set up assessment tasks and reporting methods.

Faculty collected evidence of student learning to establish a need for change, to test assumptions held about student learning, and to provide base data to document significant changes that would occur in the future.

Phase 3:  Closing the Loop

Learning assessment at the course level provides data both on individual student performance for grading purposes and on the overall effectiveness of instruction for identifying those areas that require improvement. Rather than just taking a summative approach to assessment, faculty can build assessment pieces into the course throughout the semester.

Table 3: Evaluation Stages

Methodology: Decide what the assessment task is and when it will be conducted.

Evidence: Conduct the experiment and collect the results.

Analysis: Some groups will have individual instructors analyze the results and then report on them with the group; these results will then merge into a view of the course as a whole. Other groups will choose to evaluate the assessment task together and report on these results.

Enhancement: How will our analysis impact future teaching and learning that goes on at LFCC?

What is most important is that instructors are studying student learning within classes and across classes to view the course as a whole, talking about student learning, and collaboratively planning ways to enhance student learning.

When instructors clarify learning goals and give feedback on student learning, students are better able to assess their own progress in meeting the goals during the course. Educators get a good sense of what is and is not working and have the opportunity to make adjustments along the way. Angelo and Cross's Classroom Assessment Techniques: A Handbook for College Teachers (1993) gives many ideas that educators can use to collect data about student learning on a regular basis. Some colleges have built their whole assessment initiative around these strategies. Reporting in group dialogue achieves the required results: educators share professionally, consider best practices, evaluate student learning, and take action based on new awareness.

At LFCC, program leads were instrumental in choosing procedures that their faculty members would follow during the implementation and evaluation process. As Diamond notes in Designing and Assessing Courses and Curricula (1998), “No two evaluations will be the same. In each instance the evaluation must be structured to serve the information needs of those involved in the decision-making process” (p. 241). On the other hand, people are more comfortable with a process when they know that there is guidance and support, that there are rewards for engaging in the process (particularly when it is somewhat unfamiliar), and that colleagues value the interaction.

What LFCC faculty had in place in Spring 2007:

·   Course Content Summaries for each course that included SLOs.

·   Course Assessment Guides with SLOs (at least two relating to general education requirements, one of which identified a critical-thinking objective) and assessment tasks.

·   A strong sense of the program lead as a guide who could assist with the process, designate assessment periods, explain how to collect results (hard copies of tests, papers, portfolios), and assist with methods of reporting results (copies of completed rubrics, compiled results from classes, a random sampling of classes to review).

Where LFCC faculty have headed since Spring 2007:

 A New Culture

How can we tell if an institution has achieved a culture of assessment?  A college has achieved its goal when faculty are willing – and even eager – to do the following:

Looking at these criteria, we can see that LFCC will continue to build this culture. Our assessment committee, administrators, and faculty believe that embarking on this continuing journey will reap many positive results. First, it supports accreditation from SACS, without which we at LFCC cannot hope to fulfill our mission, vision, and teaching and learning goals. Second, our college can meet its responsibility to the VCCS to conduct institutional, program, and course assessment and report on our findings. Third – and perhaps most importantly – while working with colleagues on student-learning research may raise more questions than it answers, and while it takes time, it also increases intellectual excitement. In fact, teachers at other institutions have overwhelmingly endorsed interaction with colleagues as the most important benefit of the assessment initiative.

As Diamond reports, “A campus culture that accepts assessment as part of the business of teaching and learning and supports self-evaluation makes ongoing improvement possible” (p. 285). With this in mind, faculty and administrators at LFCC are committed to this culture – to an ongoing process of documenting student learning as the important component of institutional effectiveness.

 

References

Angelo, T., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco, CA: Jossey-Bass Publishers.

Diamond, R. (1998). Designing and assessing courses and curricula. San Francisco, CA: Jossey-Bass Publishers.

Council of Regional Accrediting Commissions. (2004). Regional accreditation and student learning: A guide for institutions and evaluators. Retrieved December 2006, from http://www.sacscoc.org/pdf/handbooks/GuideForInstitutions.pdf

 


Dr. Katherine P. Simpson is an English professor at Lord Fairfax Community College.
