Program Level Assessment

Program-level assessment evaluates the collective impact of an academic program on student learning and development, focusing on how effectively the program achieves its stated program outcomes.

The purpose of a program assessment plan is to guide continuous improvement by providing faculty with a clear framework for evaluating program outcomes. This includes detailing what will be measured, the timing, who will be responsible for data collection, and how the results will be applied to enhance the program.

At the University of Missouri, all academic programs must have an assessment plan that includes:

  • Specific and measurable program outcomes that define what students should know or be able to do by the end of their program.
  • Specific and measurable course learning objectives for every course.
  • Clear alignment of each course learning objective with one or more program outcomes, demonstrating how individual courses contribute to overarching program outcomes.
  • A structured approach to collecting direct evidence of student learning, with at least one program outcome assessed per year in a rotating cycle. This strategy aims to evaluate all program outcomes within a five-year period, ensuring a comprehensive and ongoing assessment process.

Inputting Program Outcomes and Student Learning Plan into the MU Educational Assessment App

All assessment plan data should be entered into the MU Educational Assessment App.

Collecting Direct Evidence of Student Learning for Program Outcomes

Evidence must be linked to individual program outcomes to assess student learning. For example, in a Biology Program with a program outcome: “At the end of this program, students will be able to analyze and interpret complex genetic data,” direct evidence could be derived from a course’s lab report where students analyze genetic sequences and predict outcomes. Student grades cannot be considered direct measures because they aggregate performance across multiple program outcomes and cannot isolate achievement of any single one.

Setting Benchmarks and Targets

Academic programs should identify their benchmarks and targets for each assessed program outcome.

Benchmark: The minimally acceptable level of performance (e.g., 3 out of 5 on an analytic rubric)
Target: The desired percentage of students reaching or surpassing the benchmark (e.g., 75% of students will achieve 3 or higher on the analytic rubric)

Establishing benchmarks and targets for each program outcome ensures that programs have clear expectations for student performance and achievement goals.
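The benchmark/target check described above is a simple calculation. The sketch below uses hypothetical rubric scores for a single program outcome, with the benchmark and target values from the example given earlier:

```python
# Hypothetical rubric ratings (1-5 scale) for one program outcome,
# one score per student.
scores = [4, 3, 2, 5, 3, 4, 1, 3, 4, 3]

benchmark = 3   # minimally acceptable rubric score
target = 0.75   # desired share of students at or above the benchmark

# Share of students meeting or exceeding the benchmark.
share_meeting = sum(s >= benchmark for s in scores) / len(scores)
target_met = share_meeting >= target

print(f"{share_meeting:.0%} of students met the benchmark; target met: {target_met}")
```

With these sample scores, 8 of 10 students score at or above 3 (80%), which exceeds the 75% target.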

Determining How You Will Assess Program Outcomes

Direct evidence of student learning can take multiple forms, including capstone projects, student portfolios, presentations, exams, standardized tests, and more. Embedded assessments offer a streamlined and long-term solution for program assessment.

Embedded Assessments are course assignments (e.g., a presentation or exam question) that can also be used as evidence of student learning of program outcomes. A program should identify how the assessment aligns to program outcomes (not just course learning objectives); this can be done through an additional rubric (e.g., an analytic rubric listing program outcomes) created for the assessment of program outcomes. For example, an exam essay that addresses program outcomes can be graded not just for course performance but analyzed for evidence of meeting one or more program outcomes. Other embedded assessments could include:

  • Capstone Projects: These comprehensive projects require students to apply a range of skills and knowledge acquired throughout the program, directly reflecting on multiple program outcomes. They can be evaluated using a detailed rubric that aligns with these outcomes, offering a holistic view of a student’s proficiency.
  • Portfolios: A collection of a student’s work compiled over time, portfolios can demonstrate growth and learning across various program outcomes. They provide a rich source of evidence when assessed with criteria directly linked to program outcomes.
  • Lab Reports: In STEM programs, lab reports can serve as direct evidence of students’ abilities to conduct experiments, analyze data, and apply scientific principles. Key reports can be assessed against specific outcome-related criteria.
  • Performance Assessments: In fields such as performing arts, performances or practical demonstrations can be directly observed and evaluated against program outcomes, providing immediate and tangible evidence of student learning.

Examples of Embedded Assessment

In a Graphic Design program, students are required to submit a portfolio of their work at the end of their final year, which showcases their skills across various design areas, including typography, web design, and branding. A panel of faculty members evaluates these portfolios using a rubric that assesses five program outcomes related to design theory, technical skills, creativity, professional presentation, and project management. Each outcome is graded on a scale:

  • 1 = Insufficient
  • 2 = Basic
  • 3 = Competent
  • 4 = Proficient
  • 5 = Exceptional

The program sets a benchmark of 3 for minimal competency and has a target for at least 75% of students to achieve a level 3 or higher in all outcomes.
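Because this example requires each student to reach the benchmark on all five outcomes, the calculation is per student rather than per outcome. A minimal sketch, using hypothetical portfolio ratings:

```python
# Hypothetical portfolio ratings for the Graphic Design example:
# each student has one score (1-5) per program outcome.
students = {
    "A": [4, 3, 5, 3, 4],
    "B": [3, 3, 3, 2, 4],   # falls below 3 on one outcome
    "C": [5, 4, 4, 3, 3],
    "D": [2, 3, 4, 3, 3],   # falls below 3 on one outcome
}

benchmark = 3
target = 0.75  # 75% of students should reach the benchmark on ALL outcomes

# A student counts only if every outcome score meets the benchmark.
passing = [s for s, scores in students.items()
           if all(x >= benchmark for x in scores)]
share = len(passing) / len(students)

print(f"{share:.0%} of students met the benchmark on every outcome")
```

Here only 2 of the 4 hypothetical students (50%) meet the benchmark on every outcome, so the 75% target would not be met.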

In a Business Administration program, students complete a group project that culminates in a presentation to industry professionals and faculty. The presentation is assessed on criteria that align with program outcomes such as market analysis, strategic planning, teamwork, and communication skills. Each outcome is assessed on a scale:

  • 1 = Does not meet expectations
  • 2 = Meets expectations
  • 3 = Exceeds expectations

Faculty and industry professionals use a standardized form to evaluate each presentation. The program has determined that a score of 2 is the benchmark, and the target is for 85% of groups to score a 2 on at least three of the four outcomes.
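This criterion (a score of at least 2 on at least three of the four outcomes) can be sketched the same way, with hypothetical group scores standing in for the standardized evaluation forms:

```python
# Hypothetical presentation ratings for the Business Administration example:
# four outcome scores (1-3 scale) per group.
groups = {
    "G1": [3, 2, 2, 1],  # meets the benchmark on 3 of 4 outcomes
    "G2": [2, 2, 3, 2],  # meets on all 4
    "G3": [1, 2, 1, 2],  # meets on only 2
    "G4": [2, 3, 2, 2],  # meets on all 4
}

benchmark = 2
target = 0.85  # 85% of groups should score >= 2 on at least 3 of 4 outcomes

# A group qualifies if it meets the benchmark on three or more outcomes.
qualifying = [g for g, scores in groups.items()
              if sum(s >= benchmark for s in scores) >= 3]
share = len(qualifying) / len(groups)

print(f"{share:.0%} of groups met the criterion; target met: {share >= target}")
```

With these sample scores, 3 of 4 groups (75%) meet the criterion, which falls short of the 85% target.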

Signature Assessments

One type of embedded assessment is the “signature assignment,” a real-world assessment integrated across student pathways to identify learning growth in a program. For insight into signature assignments, see the following:

  • Pinahs-Schultz, P., & Beck, B. (2016). Development and assessment of signature assignments to increase student engagement in undergraduate public health. Pedagogy in Health Promotion, 2(3), 206-213.
  • Roach, S., & Alvey, J. (2021, February 4). Fostering Integrative Learning and Reflection through “Signature Assignments.” Liberal Education. American Association of Colleges and Universities.


For assistance in any aspect of program assessment, please reach out to Jonathan Cisco, Director of Educational Assessment.
