Office of Institutional Effectiveness
Assessment of Learning
“The real goal of classroom assessment is to improve student performance, not merely audit it.” – Sousa and Tomlinson
“Assessment in this spirit [Assessment For Learning] does not concern the assignment of grades or evaluation of whether instruction was effective. It’s assessment designed squarely to feed into the learning process and make the learner stronger.” – David N. Perkins
“All Assessment is a perpetual Work in Progress” – Linda Suskie
This site contains information on our most current accreditation review. Please see the WASC Accreditation site for information on our previous review.
University Faculty Director of Assessment and Professor of Communication
Introduction to Assessment
Why Assess Student Learning Outcomes?
Faculty and departments already evaluate the performance of students in their courses and programs in order to provide feedback to students, maintain the quality and effectiveness of the curriculum, and track students’ progress as they complete the requirements for their degree. In one sense, therefore, the concept of assessment is neither new nor unrelated to teaching and university service, although expectations for assessment have changed to some extent over time. Furthermore, faculty are increasingly being asked to make the assessment process more transparent for students and to document their results by writing reports and sharing them with internal and external stakeholders. Although assessment reports are now used for multiple purposes, the primary aim of assessment is still to evaluate student work relative to specific outcomes in order to support and improve student learning.
Meaningful assessment is beneficial for students, faculty, and departments for the following reasons:
- Clearly stating measurable outcomes, and communicating the key outcomes in which students are expected to demonstrate proficiency by the end of a course or program, helps students achieve those outcomes.
- Valid assessments use measures (assignments) that are aligned with the student learning outcomes, which enables faculty and departments to determine whether each student in the sample has achieved the expected level of proficiency. Furthermore, when an assignment focuses squarely on the designated student learning outcome, the assessment results provide a wealth of detail that can reveal patterns of strength and weakness in student performance.
- Documenting relevant and valid data on student learning outcomes enables programs either to demonstrate the need for changes and specific kinds of support or to demonstrate that their students are meeting expectations for the relevant student learning outcomes. Results can be provided internally to administrators as well as to legislative bodies and accreditation agencies such as the WASC Senior College and University Commission (WSCUC).
Every department/program at Fresno State has developed and implemented a Student Outcomes Assessment Plan (SOAP), and assessment activities should correspond to the SOAP. The SOAP should also be reviewed periodically and revised as necessary; if needed, it can be revised mid-year to change the assessment activities that will be carried out that year. Additional information and examples can be found on the SOAP page of this website.
Overview of the Assessment Process:
Next: Programmatic Purposes
Assessment at Fresno State
The assessment of student learning, student development, and program outcomes is essential to the health and vitality of academic programs at California State University, Fresno. The primary purpose of assessment at the course, program, and institutional level is to further student engagement and student learning, and programs also use assessment results to improve instruction and curricula. Accreditation agencies for various professional areas, as well as the institution itself, have assessment standards that require connecting student performance to a program’s stated mission, goals, and outcomes.
Planning at the programmatic level to assess student learning outcomes is reflected in the program’s SOAP, which should be maintained at the department/program level and updated periodically to reflect assessment activities, results, and changing circumstances.
Since every academic program has a SOAP and has carried out ongoing assessment, the emphasis now is on how programs can use assessment results either to demonstrate that students are achieving the designated learning outcomes or to make adjustments to the curriculum and program to help students achieve proficiency in the required learning outcomes. Program-level assessment is also directly connected to institutional values and outcomes, which can be reviewed by clicking on the link below.
Director of Assessment at Fresno State
If you have any questions, especially while revising your department SOAP or while planning and implementing assessment activities, you can reach out to your college assessment coordinator or e-mail the Director of Assessment at firstname.lastname@example.org.
Dr. Douglas Fraleigh was appointed as the second Director of Assessment at Fresno State effective August 20, 2020. He had served as a college assessment coordinator since those positions were created in 2013, chaired the ad hoc oral communication core competency committee during the 2015-2016 AY, and has been a member of the Learning Assessment Team since 2014, working to advance effective assessment at the University. He will now lead assessment efforts at Fresno State. The previous Director of Assessment, Dr. Melissa Jordine, accepted a position at Central Washington University beginning in Fall 2020.
Learning Assessment Team and College Assessment Coordinators
The Learning Assessment Team (LAT) aims to strengthen outcomes assessment across the campus. The LAT reviews all major assessment reports on an annual basis, and the coordinator of the team communicates with departments regarding their reports. Members also provide the Deans, Provost, and departments with an overview of the status of programs’ assessment.
College Assessment Coordinators: The Provost, Vice-Provost, and Deans worked together to create College Assessment Coordinator positions. One faculty member in each college was designated to work with every department to facilitate the planning and implementation of annual assessment activities. Current Assessment Coordinators are as follows:
CAH -- Prof. Matthew Hopson-Walker, MFA
CHHS -- Dr. Kara Zografos
COSS -- Dr. Monica Summers
CSB -- Dr. Jennifer Miele
CSM -- Dr. Paul Price
JCAST -- Dr. Serhat Asci
KSOEHD -- Dr. Jessica Hannigan
LCOE -- Dr. Ching Chiaw Choo
Library -- Ms. Sarah McDaniel, MA and MLIS
Advanced Assessment Methods
There are a number of resources available for departments that want to conduct more in-depth studies of specific outcomes, such as writing or critical thinking, over multiple semesters. The website links below provide information on larger-scale projects implemented by other institutions. Assessment coordinators can contact the Center for Faculty Excellence and work with instructional designers to identify, and receive training on, technology such as Pearson's My Writing Lab and Qualtrics surveys for carrying out assessment activities. Coordinators can also review the proposal requirements for OIE Assessment Grants to determine whether their project would qualify. The Director of Assessment is available to discuss possible grant proposals and to work with departments that want to access basic statistics, such as graduation rates for their majors, using the Tableau dashboards created by OIE data analysts. Department chairs and assessment coordinators can also contact the Vice-President for the Office of Institutional Effectiveness, Dr. Angel Sanchez, to discuss working with OIE to carry out a case study.
ePortfolio Initiatives at Virginia Tech
Educause Learning Initiative: An Overview of E-Portfolios
Glossary and Resources
Alignment
The deliberate use of measures/assignments that are directly relevant to, and genuinely able to assess, course content as well as specific student learning outcomes.
Assessment
The collection, analysis, and use of evidence to improve student learning in courses and in disciplinary or general education programs.
Assessment techniques can be applied to gather information on student learning objectives, in which case direct or indirect measures of student learning can be used. Assessment techniques can also be focused on capturing program-level outcomes, including student retention rates, graduation rates, student ethnicity, and community interactions, to name a few.
Authentic Assessment
The assessment process is embedded in relevant real-world activities.
Benchmark
The criteria for evaluating assessment results against an empirically developed standard. An example would be the expectation that two-thirds of all students score a 3 out of 4 on the critical thinking rubric used to evaluate assignments.
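As a hypothetical illustration, checking results against a benchmark like the one above is simple arithmetic. The rubric scores below are invented, and the benchmark is read here as "a score of at least 3 out of 4" for two-thirds of students:

```python
# Hypothetical illustration: the rubric scores are invented, and the
# benchmark is read as "at least 3 out of 4" for two-thirds of students.
scores = [4, 3, 2, 3, 4, 3, 1, 3, 4, 2, 3, 3]

benchmark = 2 / 3                              # required proportion of students
proficient = sum(1 for s in scores if s >= 3)  # students scoring 3 or 4
proportion = proficient / len(scores)

print(f"{proficient}/{len(scores)} students proficient ({proportion:.0%})")
print("Benchmark met" if proportion >= benchmark else "Benchmark not met")
# → 9/12 students proficient (75%); Benchmark met
```

The same check scales to any rubric cut score or target proportion a program writes into its SOAP.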
Classroom Assessment
Assessment to improve the teaching of specific courses and segments of courses.
Close the loop
Faculty discuss assessment results, reach conclusions about their meaning, determine what changes, if any, need to be made based on the results, and implement the appropriate changes.
Course-Level Assessment
Assessment techniques are brought to bear on the stated learning objectives of a given course. Assessment evidence may be collected during the semester within the course or at some time after the course concludes. Data gathered during the course reflect student progress toward the stated course objectives, while data gathered afterward provide evidence that the stated learning outcomes have persisted over the time elapsed.
Curriculum Map
Presented as a matrix, a curriculum map relates program-level student learning outcomes (usually enumerated in individual rows) to the courses and/or experiences that students complete on the way to graduation (usually captured in columns).
Direct Measures of Student Learning
In contrast to opinion surveys and instruments that gather self-reports of student knowledge and/or ability, direct measures of student learning are generated when student work is evaluated in order to determine their performance on a specific learning outcome. Third-party reports of what students know and can do represent direct measures of student learning when the reports are based on direct observation or review of student work submitted to the third party and are student-specific rather than summarized across a cohort of students.
Embedded Assessment
Also referred to as course-embedded assessment, these techniques generate assessments of course-specific student learning outcomes entirely within the duration of the specific course. Many assessment techniques can be applied to routine assignments within a course and summarized across multiple sections and/or semesters to provide evidence of student learning at the program level.
Formative Assessment
Utilizes assessment techniques that emphasize the role of feedback in assessing how students are learning, and then uses that information to make beneficial changes in instruction and/or the learning environment. Formative assessment usually focuses on a limited set of specific outcomes, often a subset of the complete roster of outcomes identified by a program, and is aimed primarily at improving program delivery.
General Education Assessment
Assessment that occurs in G.E. classes; most of the students assessed are therefore not majoring in the subject in which they are being assessed.
Holistic Rubric
A rubric that involves one global, holistic judgment.
Indirect Measures of Student Learning
Usually found in opinion surveys and instruments that gather self-reports of student knowledge, indirect measures of student learning are generated when students report on their own learning progress, the experiences to which they attribute their learning, how they feel about what they know, and what they value as a result of their educational experiences. Third-party reports of what students know and can do represent indirect measures of student learning when the reports are summarized across a cohort of students rather than student-specific.
Inter-Rater Reliability
How well two or more raters agree when decisions are based on subjective judgments.
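One common, simple way to gauge agreement is exact percent agreement between two raters; this is a hedged sketch with invented scores (formal statistics such as Cohen's kappa also correct for agreement expected by chance):

```python
# Hypothetical sketch: exact percent agreement between two raters scoring
# the same ten papers on a 4-point rubric (the scores are invented).
rater_a = [3, 4, 2, 3, 1, 4, 3, 2, 3, 4]
rater_b = [3, 4, 2, 2, 1, 4, 3, 3, 3, 4]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)

print(f"Raters gave the same score on {matches} of {len(rater_a)} papers "
      f"({agreement:.0%} exact agreement)")
# → Raters gave the same score on 8 of 10 papers (80% exact agreement)
```

Low agreement is usually a signal to norm or calibrate the raters (see the glossary entry below) before scoring the full sample.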
Major Assessment Report
Initially, information on assessment was included in the annual report, which was and still is due in May. Assessment activities are now described in a separate major assessment report that is due September 1st. A template that should be used for this report can be found in the assessment reporting section of this website.
Norming or Calibration
Evaluators are normed or calibrated so that they consistently apply standards in the same way.
Program Assessment
Assessments of student learning and of the development of program goals and outcomes give program faculty opportunities to evaluate the effectiveness and status of their academic program, while also yielding vital information for improving curriculum and instruction. Program assessment is comprehensive across a set of prioritized program outcomes, in contrast to course assessment, which is limited to course-specific outcomes.
Quota Sample
A sample created using predetermined criteria, such as proportional representation of students at each class level.
Random Sample
A sample in which every element in the population has an equal chance of being selected.
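A hedged sketch of drawing such a sample (the paper names and counts are invented for illustration):

```python
# Hypothetical sketch: drawing a simple random sample of student papers,
# where every paper has an equal chance of being selected for scoring.
import random

papers = [f"paper_{i:03d}" for i in range(1, 121)]  # 120 submissions (assumed)

random.seed(2024)                   # fixed seed so the draw is reproducible
sample = random.sample(papers, 30)  # 30 papers drawn without replacement

print(f"Selected {len(sample)} of {len(papers)} papers for assessment")
```

Recording the seed (or the list of selected papers) makes the draw auditable when the results are written up in an assessment report.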
Reliability
The degree of measurement precision and stability for a test or assessment procedure.
Response Rate
The proportion of contacted individuals who respond to a request.
Rubric
An explicit scheme for classifying products or behaviors into categories that are steps along a continuum.
How well a procedure’s components, such as test items, reflect the full range of what is being assessed.
Signature Assignment
An assignment, task, activity, or project purposefully created or modified to collect evidence for specific learning outcomes. The ‘signature’ part of the assignment is the set of defining characteristics that reveal deep thinking and help students think like disciplinary experts. Ideally, other coursework builds toward the signature assignment, so that it measures the culmination of what the student learned in the course for a particular outcome. Signature assignments work well when they are course embedded.
Signature assignments can be designed collaboratively by faculty. They can be generic in task, problem, case, or project to allow for contextualization in different disciplines or course contexts.
Survey
A questionnaire that collects information about beliefs, experiences, or attitudes.
SOAP
Student Outcomes Assessment Plan: a specific plan created by a department or program that clearly identifies goals and student learning outcomes, as well as the specific direct and indirect measures that will be used to assess the department’s/program’s SLOs. At Fresno State, all SOAPs must use the required template found in the SOAPs section of this website. There are seven required elements for the SOAP, and the final required element is a discussion of closing the loop: how the department will use assessment results, determine whether any changes are necessary, and implement changes if they are.
Student Learning Outcome
Student Learning Outcomes are statements that describe significant and essential objectives that learners have achieved and can reliably demonstrate at the end of a course or program. In other words, learning outcomes identify what the learner will know and be able to do by the end of a course or program. Student Learning Outcomes (SLOs) must:
- reflect essential knowledge, skills or attitudes;
- focus on results of the learning experiences;
- reflect the desired end of the learning experience, not the means or the process;
- represent the minimum performances that must be achieved to successfully complete a course or program.
SLOs must be stated clearly, using a verb appropriate to the level of the skill being demonstrated. Basic knowledge can be demonstrated by explaining or describing, while the ability to make deductions can be demonstrated by analyzing a point or idea. Bloom's Taxonomy provides specific information on lower- and higher-order skills and the appropriate terms.
Summative Assessment
Utilizes assessment techniques that emphasize the comprehensive achievement of program outcomes across comparatively large student cohorts. While summative and formative assessment need not be mutually exclusive, the tenor of summative assessment is to provide evidence of accountability and of the achievement of comprehensive program outcomes, whereas formative assessment focuses on feedback to improve program delivery.
Triangulation
Multiple lines of evidence lead to the same conclusion.
Validity
The extent to which assignments successfully measure what they are supposed to measure and align with an institution’s goals and objectives.
Value Added Measure
These are used to estimate or quantify how much of a positive (or negative) effect individual teachers have on student learning during the course of a given school year. To produce the estimates, value-added measures typically use sophisticated statistical algorithms and standardized-test results, combined with other information about students, to determine a “value-added score” for a teacher. School administrators may then use the score, usually in combination with classroom observations and other information about a teacher, to make decisions about tenure, compensation, or employment.
Guidelines for Using and Creating Rubrics
- Louisiana Dept. of Education Rubrics and Assessing Student Work
- Rubistar website for creating Rubrics
Rubrics for Assessing Essential Skills/Core Competencies
- Critical Thinking Scoring Guide
- CSB Oral Presentation Rubric
- AAC&U Information Literacy Rubric
- CSB Quantitative Rubric
- AAC&U Written Communication Rubric
- CSB Writing Rubric
- Theatre Arts Writing Rubric
- Carnegie Mellon Oral Communication Rubric
- St. John’s University Quantitative Reasoning Rubric
Rubrics for Assessing Digital/Multimedia/ePortfolio Projects
Rubrics for Specific Departments
- Carnegie Mellon University Philosophy Rubric
- Carnegie Mellon University History Rubric
- University of Rhode Island Art and Design Rubric
Rubrics for Graduate Programs
- Rhetoric and Composition Doctoral Rubric
- Thesis/Dissertation Proposal Rubric
- California State University, Fullerton's Rubric for English MA Essays
- California State University, Long Beach's Rubric for Peer Evaluation of Thesis Proposals
- Georgia State University-Graduate Oral Communication Rubric
- San Diego State University-Graduate Business Writing Rubric
- San Diego State University-Graduate Business Oral Communication Rubric