Criterion 3

Resource Guide for HLC Criterion 3 (2025): Teaching and Learning for Student Success

The Higher Learning Commission’s Criterion 3 focuses on Teaching and Learning for Student Success, emphasizing that an institution is responsible for the quality of its educational programs, learning environments, and support services (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading). Under the revised 2025 Criteria for Accreditation, Criterion 3 has been updated and expanded (incorporating elements from the old Criterion 4 of the 2020 Criteria) to address how institutions ensure academic quality and continuously improve student learning and success. This guide follows the structure of the 2025 Criterion 3 (Core Components 3.A through 3.G) and provides resource suggestions for each. It integrates evidence ideas from HLC’s guidance, example practices drawn from prior documentation, and embedded questions institutions can use to assess themselves. (For reference, see HLC’s official 2025 Criteria (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading).)

Core Component 3.A: Educational Programs

Revised Core Component 3.A (2025) – “The institution maintains learning goals and outcomes that reflect a level of rigor commensurate with college-level work, including by program level and the content of each of its educational programs.” (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading) In other words, institutions must ensure that curricula and learning outcomes are rigorous, current, and appropriate to the degree level and field. All programs (undergraduate, graduate, certificates, etc.) should have clearly articulated learning goals, and the quality and rigor should be consistent across different locations, modalities (online, hybrid, in-person), and delivery methods (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading).

Integration note: This component is similar to the former 3.A in the 2020 Criteria, which stressed appropriate rigor of academic offerings (and had subcomponents about up-to-date courses, differentiated learning goals by level, and consistency across modalities). The expectations remain that institutions define, differentiate, and uphold learning outcomes for each program level.

Evidence to Demonstrate 3.A (Educational Programs)

  • Program Learning Outcomes Documentation: Published program learning goals/outcomes for each academic program, showing they are at the appropriate level (e.g. outcomes for associate vs. bachelor’s vs. master’s programs) (Providing Evidence for the Criteria for Accreditation). For example, an academic catalog or website section listing program outcomes by degree level (Providing Evidence for the Criteria for Accreditation).
  • Curricular Rigor and Currency: Evidence that courses and programs are current and rigorous, such as recent curriculum committee minutes or graduate council minutes approving updates, and external program review reports confirming curricular quality. Ensure credit hour policy compliance documentation is available to demonstrate academic credit aligns with college-level work.
  • Differentiation by Level: Documents illustrating how learning goals escalate in complexity at higher program levels. This could include a framework for level differentiation (e.g. use of Bloom’s Taxonomy to distinguish undergraduate vs. graduate outcomes) and any linkages between undergraduate and graduate programs that show progressively higher expectations.
  • Consistency Across Modalities: Policies or reports showing that regardless of delivery mode or location, the same outcomes and standards apply. For instance, a syllabus template or guidelines used institution-wide, and comparative assessment results demonstrating that online and on-campus cohorts achieve the same learning outcomes.
  • Program Review of Curriculum: Although formal program review is covered under 3.F, any evidence of periodic curricular review for currency fits here as well. This could be summary reports from recent program revisions, advisory board feedback on curriculum relevance, or accreditation self-studies for specialized programs (Providing Evidence for the Criteria for Accreditation).

Example Effective Practices for 3.A

  • Comprehensive Catalog of Outcomes: Maintaining a public academic catalog that clearly articulates program learning outcomes for every credential, updated regularly (Providing Evidence for the Criteria for Accreditation). This helps ensure transparency about academic expectations and rigor.
  • Regular Curriculum Updates: Implementing a cycle (e.g. annual or biennial) for faculty committees to review and update curricula. Practice example: forming a curriculum review committee that involves faculty and administrators in reviewing syllabi, course content, and outcome alignment for rigor (Providing Evidence for the Criteria for Accreditation).
  • Use of Frameworks for Rigor: Adopting frameworks (like Bloom’s Taxonomy or Lumina’s Degree Qualifications Profile) to set and evaluate the rigor of learning goals at different levels. Institutions map course and program outcomes to these frameworks to verify appropriate cognitive complexity at each degree level.
  • Differentiated Admission or Progression Criteria: Ensuring that program entry requirements (GPA, prerequisites, portfolios, etc.) and advancement criteria align with the expected rigor of the program (Providing Evidence for the Criteria for Accreditation). For example, higher-level programs might require demonstration of foundational skills appropriate to advanced study.
  • Modality Equivalence Checks: Conducting comparative studies of student performance in different formats. For instance, an institution might analyze whether a course taught online yields similar achievement of outcomes as the same course on campus – and use the results to improve course design, thus evidencing consistency in quality.

Self-Assessment Questions for 3.A

  • Learning Outcomes: What are the learning outcomes for each of our programs, and do they clearly reflect college-level rigor? How do we ensure these outcomes remain relevant and challenging as disciplines evolve?
  • Curriculum Oversight: What processes do we use to develop, approve, and review curricula? Are faculty (and where appropriate, external experts) actively involved in reviewing course rigor and content currency (Providing Evidence for the Criteria for Accreditation)?
  • Level Differentiation: How do we articulate the differences in learning goals between certificate, undergraduate, and graduate programs? Can we demonstrate that graduate programs demand more advanced competencies than undergraduate programs (through curriculum design, assessments, etc.)?
  • Modality and Location: How do we guarantee that students receive the same quality education across all campuses, online courses, and other delivery methods? Do we have evidence (such as assessment data or program evaluations) showing consistency in student learning regardless of modality?
  • External Standards: Are our program outcomes aligned with external expectations (e.g. accreditation standards, licensure exams, industry requirements)? If applicable, do licensure or certification exam results indicate our programs are sufficiently rigorous (Providing Evidence for the Criteria for Accreditation)?

Core Component 3.B: Exercise of Intellectual Inquiry

Revised Core Component 3.B (2025) – “The institution’s educational programs engage students in collecting, analyzing and communicating information; in practicing modes of intellectual inquiry or creative work; and in developing skills adaptable to changing environments.” (Providing Evidence for the Criteria for Accreditation) This component addresses the breadth and depth of student learning experiences. Institutions should provide learning opportunities that foster critical thinking, inquiry, creativity, and adaptability. In practice, this often encompasses robust general education, research and creative activities, and curricular components that prepare students for a changing world.

Integration note: The new 3.B corresponds closely to the old 3.B (2020) which included engaging students in inquiry and also explicitly mentioned general education, diversity, and scholarly contributions. While the 2025 wording is more concise, colleges should still ensure that general education and co-curricular programs cultivate broad intellectual skills, include diverse perspectives, and encourage scholarly and creative achievements. These aspects are implied and supported by evidence expectations (e.g. general education learning goals, diversity initiatives, and student/faculty scholarship) (Providing Evidence for the Criteria for Accreditation).

Evidence to Demonstrate 3.B (Intellectual Inquiry)

  • General Education Program: Documentation of a coherent general education curriculum that aligns with the institution’s mission and engages students in broad learning. Provide the general education philosophy, learning outcomes, and requirements (e.g. gen ed outcome statements in the catalog). Also include any evidence that gen ed is regularly reviewed or approved by faculty governance.
  • Inquiry and Creativity in Curriculum: Examples of how programs require students to engage in research, inquiry, or creative work. This could include sample syllabi or assignments that demonstrate students must conduct research projects, artistic performances, scientific experiments, etc., as part of their coursework (Providing Evidence for the Criteria for Accreditation). Also, assessment data or results that show students are achieving information literacy or inquiry-related competencies (Providing Evidence for the Criteria for Accreditation).
  • Multiple Modes of Inquiry: Evidence that different modes of inquiry (scientific, artistic, humanistic, etc.) are incorporated. For instance, list of capstone projects, theses, exhibits, recitals, or other culminating works across various programs (Providing Evidence for the Criteria for Accreditation), or a requirement that all students take courses in diverse disciplines (STEM, humanities, arts).
  • Diversity and Global Learning: Records showing that the curriculum addresses human and cultural diversity, preparing students for a multicultural world. This might be multicultural course requirements, co-curricular diversity programming (with committee minutes or event summaries), study abroad participation, or inclusion of diverse perspectives in courses (per syllabi or course descriptions).
  • Student and Faculty Scholarship: Information on how faculty and students contribute to scholarship and creative work. Include annual research symposium programs highlighting student research, faculty publication lists, or undergraduate research initiatives (Providing Evidence for the Criteria for Accreditation). If applicable, note achievements like student theses, exhibits, or joint faculty-student research projects that demonstrate an inquiry-rich environment.

Example Effective Practices for 3.B

  • Integrated General Education: Ensuring the general education program has explicit learning outcomes (e.g. critical thinking, quantitative literacy, intercultural competence) and that these outcomes are assessed. Many institutions create a gen ed assessment report each year to gauge how well students are learning broad skills, and use results to tweak the curriculum.
  • High-Impact Learning Practices: Encouraging practices such as undergraduate research, service-learning, internships, and capstone projects across programs. For example, an institution might require a senior project or portfolio in all majors to integrate inquiry and communication skills. Internship and practicum site listings (Providing Evidence for the Criteria for Accreditation) show that students apply inquiry skills in real-world settings.
  • Showcasing Creative Work: Hosting events like research fairs, art exhibitions, or engineering project expos where students present their work. This not only documents student inquiry and creativity (as evidence) but also reinforces a culture of scholarly engagement. Practice example: Annual research symposiums with proceedings highlighting faculty and student scholarship (Providing Evidence for the Criteria for Accreditation).
  • Curriculum Design for Adaptability: Designing curriculum to include problem-solving and adaptability. Some institutions incorporate interdisciplinary projects or case studies that simulate complex, changing scenarios. For instance, courses that have students solve community problems or adapt projects to new constraints demonstrate this adaptability focus.
  • Faculty Development on Inquiry: Providing training and resources for faculty to integrate inquiry-based learning and creative assignments into their teaching. Workshops on undergraduate research mentoring or inquiry-guided learning are a good practice, indirectly evidenced by faculty development schedules or teaching innovation grants.

Self-Assessment Questions for 3.B

  • General Education: How does our general education program impart broad knowledge and essential skills (like critical thinking and communication)? Are the purposes and outcomes of gen ed clearly articulated and regularly reviewed?
  • Student Engagement in Inquiry: In what ways do our programs require students to engage in research or creative work? Can we point to examples of student work that demonstrate inquiry skills (research papers, art performances, lab projects)?
  • Curricular Breadth: Do all students experience multiple modes of inquiry? For example, do non-STEM majors get scientific reasoning, and STEM majors get exposure to humanities or arts? How do we ensure a well-rounded intellectual experience?
  • Diversity and Global Preparation: How does our curriculum prepare students to live and work in a diverse, global society? Are there requirements or electives focusing on diversity, equity, and inclusion? What co-curricular activities support this?
  • Scholarly Culture: What opportunities do students have to collaborate with faculty on research or creative projects? How do we recognize and support faculty and student scholarly achievements? (E.g., internal grants, symposiums, publication showcases.)

Core Component 3.C: Sufficiency of Faculty and Staff

Revised Core Component 3.C (2025) – “The institution has the faculty and staff needed for effective, high-quality programs and student services.” (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading) This component requires institutions to have adequate and qualified human resources to deliver their academic programs and support services. It covers faculty numbers, credentials, evaluation, and development, as well as the qualifications and support for staff who help students (advisors, tutors, student services personnel). While the 2025 wording is streamlined, it implies all the key elements from the older criteria: hiring the right people, maintaining appropriate faculty-student ratios, and supporting continuous improvement of faculty/staff performance.

Integration note: The new 3.C merges content from the old 3.C (2020), which was very detailed about faculty qualifications, evaluation, professional development, accessibility to students, and qualified staff for student support. One aspect of old 3.C – diversity of faculty/staff (old 3.C.1) – was moved to Assumed Practices. Nevertheless, institutions are still encouraged to show efforts toward a diverse faculty/staff profile. The core expectation is that faculty and staff are sufficient in number and appropriately qualified/trained to maintain academic quality.

Evidence to Demonstrate 3.C (Faculty and Staff Capacity)

  • Faculty Qualifications Roster: A comprehensive roster of faculty (full-time, part-time, adjunct, dual credit instructors) with their qualifications (degrees, credentials) and courses taught. This demonstrates that all instructors are appropriately qualified in the subjects they teach (meeting HLC’s qualification guidelines, typically a master’s degree or higher in the discipline, etc.).
  • Faculty and Staff Numbers: Data on faculty counts and student-to-faculty ratios for various programs and modalities (Providing Evidence for the Criteria for Accreditation). Also, information on student-to-staff ratios in key support areas (advising caseloads, counselor-to-student ratios) if available (Providing Evidence for the Criteria for Accreditation). Sufficient numbers ensure that programs and services can be delivered effectively.
  • Hiring Policies: Official hiring policies and procedures that ensure qualified faculty/staff are recruited. For example, guidelines that align with HLC’s faculty qualifications policy (such as requiring transcripts or credentials verification) (Providing Evidence for the Criteria for Accreditation). If relevant, evidence that hiring committees consider diversity and mission-fit (though diversity is now an assumed practice, it can still be documented as a strength).
  • Evaluation and Development: Documents showing regular evaluation of instructors and staff (e.g. faculty evaluation policies, tenure review criteria, student course evaluations summary). Also, evidence of professional development support: records of workshops, teaching training, or funding for conferences (Providing Evidence for the Criteria for Accreditation). For instance, a calendar of faculty development seminars, or a budget line for staff training, and how many participated.
  • Accessibility and Support Roles: Statements or policies about faculty responsibilities for student interaction (like office hour policies or advising expectations). Also, organizational charts or job descriptions for student support staff (advisors, tutors, counselors) showing they are qualified and trained. For example, summary of qualifications for Student Affairs staff and any ongoing training they receive (Providing Evidence for the Criteria for Accreditation).

Example Effective Practices for 3.C

  • Strategic Staffing Plans: Regularly analyzing staffing needs and making strategic hires to ensure each program has enough faculty and each service area has enough staff. Many institutions create a faculty staffing plan tied to enrollment and program growth, ensuring sufficiency proactively.
  • Credential Review at Hiring: Using a standardized process to verify faculty credentials and tested experience at hiring. Practice example: a checklist requiring academic credentials verification, reference checks for teaching experience, and teaching demonstrations for new faculty hires to ensure quality.
  • Faculty Development Programs: Having a dedicated Center for Teaching and Learning or similar unit that offers ongoing training (e.g. in instructional design, online pedagogy, or cultural competency in teaching) (Providing Evidence for the Criteria for Accreditation). Mentoring programs for new faculty and funding for faculty research or sabbaticals (Providing Evidence for the Criteria for Accreditation) also illustrate investment in faculty currency and quality.
  • Adjunct Integration: Including part-time and adjunct faculty in trainings and evaluations. For example, an institution might provide an orientation program for all new instructors (full-time or adjunct) (Providing Evidence for the Criteria for Accreditation), and ensure adjuncts have access to resources and are evaluated on a regular schedule.
  • Support Staff Professionalization: Encouraging student support staff to pursue relevant certifications or continuing education (for instance, academic advisors obtaining advising certifications). Keeping documentation of staff professional development plans and annual performance reviews shows a commitment to high-quality support services (Providing Evidence for the Criteria for Accreditation).

Self-Assessment Questions for 3.C

  • Adequacy of Faculty: Do we have enough faculty in each program to cover teaching, mentoring, and curriculum development needs? How do we determine the appropriate number of faculty, and what are our student-to-faculty ratios in key areas (Providing Evidence for the Criteria for Accreditation)?
  • Faculty Qualifications: Are all instructors in each department appropriately qualified (academic credentials or equivalent experience)? What process is in place to verify and monitor faculty qualifications (Providing Evidence for the Criteria for Accreditation)?
  • Evaluation and Improvement: How are faculty and staff evaluated, and how often? Do we use evaluation results to identify professional development needs and provide support (e.g., teaching workshops, staff training)?
  • Support Staffing: Do we have sufficient staff for critical student services (advising, tutoring, library, tech support)? For example, is the advising caseload reasonable, and are tutors available at needed times? If gaps exist, how are we addressing them?
  • Faculty/Staff Support: What opportunities do faculty and staff have for professional development? Do we offer resources like sabbaticals, conference travel funds, or in-house training to help them stay current in their fields and improve their skills?

Core Component 3.D: Support for Student Learning and Resources for Teaching

Revised Core Component 3.D (2025) – “The institution provides student support services that address the needs of its student populations, as well as the teaching resources and infrastructure necessary for student success.” (Providing Evidence for the Criteria for Accreditation) This component combines two critical aspects: (1) Student support services (academic, co-curricular, and administrative support that helps students succeed) and (2) Learning resources/infrastructure (the facilities, technology, and resources that faculty and students need for effective teaching and learning). Essentially, 3.D ensures that an institution not only offers programs (as in 3.A) but also provides the holistic support and environment for students to thrive and for instructors to teach effectively.

Integration note: The revised 3.D aligns with the old 3.D (2020) which stated “support for student learning and resources for effective teaching.” The subcomponents from 2020 included providing support services suited to student needs, preparatory/remedial instruction, appropriate academic advising, and adequate infrastructure such as libraries and labs. Those elements remain under this component. Institutions should demonstrate a comprehensive system of student support (from tutoring and advising to co-curricular engagement) and sufficient learning resources (technology, library, labs, etc.), with evidence of evaluating and improving these services.

Evidence to Demonstrate 3.D (Student Support and Resources)

  • Inventory of Support Services: A list or description of all major student support services available, such as tutoring/writing centers, disability services, academic advising, career services, financial aid advising, counseling, veterans services, child care, health services, clubs/organizations, etc. (Providing Evidence for the Criteria for Accreditation). Including this list (perhaps from a student handbook or website) shows the scope of support provided.
  • Usage and Effectiveness Data: Data or reports on student utilization of support services and any outcomes or improvements. For example, tutoring center usage statistics and impact on course success rates, or results from student satisfaction surveys (e.g. NSSE or CCSSE) that pertain to support services. Evidence of improvement initiatives (minutes or reports from committees that review student support) demonstrates responsiveness (Providing Evidence for the Criteria for Accreditation).
  • Academic Advising Process: Documentation of how academic advising is structured and delivered. This may include an advising policy or advising handbook, advisor training materials, and description of tools (degree audit systems) used (Providing Evidence for the Criteria for Accreditation). Evidence might be a flowchart of the advising process from enrollment to graduation or outcomes like increased retention after advising improvements.
  • Learning Infrastructure: Information about the physical and online infrastructure supporting learning. This includes library resources (collections, databases, interlibrary loan stats), IT resources (computer labs, Wi-Fi coverage, learning management systems), laboratories and specialized facilities for sciences or arts (Providing Evidence for the Criteria for Accreditation), and any learning spaces like study areas or innovation centers. Provide examples like descriptions of new lab installations, or an overview of instructional technology available to faculty and students.
  • First-Year and Enrichment Programs: If applicable, evidence of special programs to bolster student learning, such as a First-Year Experience (FYE) program, bridge programs, supplemental instruction, or honors programs (Providing Evidence for the Criteria for Accreditation). For instance, an FYE syllabus or participation data can show how the institution supports new student integration and skill-building.
  • Policies on Academic Integrity: A supportive learning environment also means clear expectations. Include evidence of plagiarism/academic integrity training or policies (e.g. orientation materials, honor codes), since these resources educate students and uphold learning quality.

Example Effective Practices for 3.D

  • Holistic Student Support Model: Adopting a “one-stop shop” or case-management approach where each student has an advisor or success coach tracking their progress. Documenting the effectiveness (like improved retention) of such a model is a strong practice example.
  • Proactive Advising and Early Alert: Using early alert systems to identify students who are struggling and proactively connect them with tutoring, advising, or counseling. For example, an institution might have a weekly report of at-risk students (based on grades or attendance) and a team that reaches out with support. The evidence of this practice could be the early alert protocol and summaries of interventions taken.
  • Learning Resource Upgrades: Regularly updating and expanding learning resources. Practice example: a college might invest in a new simulation lab for nursing or new digital library databases, and conduct training for faculty to integrate these into teaching. Keeping track of these upgrades and usage stats shows a commitment to providing modern resources.
  • Student Feedback Mechanisms: Implementing formal ways for students to give feedback on support services (surveys, student advisory boards) and then acting on that feedback. For instance, if students indicated library hours were insufficient, the institution extends hours – this change and its rationale could be noted in planning documents or student government minutes.
  • Co-Curricular Integration: Aligning co-curricular programs (student organizations, leadership programs, service learning) with academic goals to enrich student learning. For example, co-curricular transcripts or reflections that tie activities to learning outcomes can be used to illustrate that support for learning extends beyond the classroom.

Self-Assessment Questions for 3.D

  • Range of Services: Do we provide a comprehensive set of support services for our student demographics? Are there any student needs (academic or personal) that are not being adequately addressed through current services?
  • Effectiveness: How do we know our support services are effective? What data do we collect on tutoring, advising, career services, etc., and how do we use that information to improve these services?
  • Advising: Is our academic advising system effective and accessible? Do students receive timely and accurate guidance from entry to completion? How do we train and evaluate advisors?
  • Resource Adequacy: Do faculty and students have the resources they need (libraries, labs, technology) to succeed? Have we identified any gaps in infrastructure, and do we have plans to address them? (For example, are lab facilities sufficient for the number of students in science courses?)
  • Continuous Improvement: What processes exist to review and improve student support services regularly? Is there a committee or task force that examines student support outcomes? How are students involved in providing input on these services?

Core Component 3.E: Assessment of Student Learning

Revised Core Component 3.E (2025) – “The institution improves the quality of educational programs based on its assessment of student learning.” (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading) This component emphasizes the full assessment loop: gathering evidence of student learning outcomes, analyzing that evidence, and using it to enhance programs. It signifies a shift of the assessment focus from the old Criterion 4.B into Criterion 3. Institutions must show they have effective assessment processes and that they “close the loop” by implementing improvements informed by assessment results.

Integration note: This core component is new to Criterion 3 in 2025 (moved from old Criterion 4). Under the 2020 Criteria, Core Component 4.B was “The institution engages in ongoing assessment of student learning”. The revised Criteria move that concept here as 3.E (Assessment of Student Learning). The expectations remain the same: institutions need a systematic assessment process for academic and co-curricular learning, evidence of faculty involvement, and evidence that assessment findings lead to improvements. Essentially, what was good practice under old 4.B is now required under 3.E.

Evidence to Demonstrate 3.E (Assessment and Improvement)

  • Assessment Plan and Cycle: A documented assessment plan that outlines how the institution assesses student learning at course, program, and institution level, including co-curricular learning where relevant. This might include an assessment calendar/cycle (e.g. which outcomes are assessed when) and the structures in place (assessment committee charters, assessment office roles).
  • Stated Learning Outcomes: Clear statements of student learning outcomes for general education, each academic program, and co-curricular programs (if applicable). Having these outcomes compiled (perhaps in program review documents or assessment reports) is essential baseline evidence.
  • Evidence of Data Collection: Examples of assessment tools and data: rubrics, curriculum maps linking outcomes to courses, samples of student work assessed, results from standardized tests or licensure exams, survey results of student learning, etc. Show that data is collected regularly (e.g. annual assessment reports by departments) (Providing Evidence for the Criteria for Accreditation).
  • Use of Results (“Closing the Loop”): Documentation that assessment results are reviewed and lead to action. For example, meeting minutes or reports demonstrating faculty reviewed assessment data and decided on changes (Providing Evidence for the Criteria for Accreditation), or a summary of improvements made (curriculum changes, pedagogy changes, resource allocation) based on assessment findings (Providing Evidence for the Criteria for Accreditation). This could also include updates to outcomes or benchmarks after reviewing data.
  • Faculty Involvement and Culture: Evidence that faculty and staff are actively involved in assessment. This might be faculty senate or assessment committee minutes where assessment is discussed, records of assessment training workshops attended by faculty, or policies requiring faculty participation in assessment activities. If applicable, include co-curricular staff involvement for areas like Student Affairs assessing their learning outcomes.
  • Continuous Improvement Examples: Concrete examples of improvements made in programs due to assessment. For instance, a department might have revised a course sequence after finding students were underperforming on a certain outcome – provide that example in narrative or report form. Another example: improvements to co-curricular programs (like an orientation program revamped after assessing first-year student feedback).

Example Effective Practices for 3.E

  • Embedded Assessment Processes: Integrating assessment into faculty routine. Practice example: some institutions tie annual program assessment reports into the program review or budgeting process, ensuring it’s not optional. By requiring each academic program to submit an outcomes assessment report each year (with data and action plans), the institution guarantees continuous monitoring.
  • Broad Definition of Learning: Assessing not just academic learning outcomes but also co-curricular learning (leadership skills from student government, communication skills from service learning, etc.). A mature practice is to have learning outcomes for student services programs and to assess those via rubrics or surveys, demonstrating improvement actions in areas like student life or advising.
  • Data Transparency and Accessibility: Using a centralized system or dashboard where assessment data (like student achievement on learning outcomes) is stored and accessible to faculty and staff. This can help departments benchmark and share best practices. For example, an institution might use a software platform where all programs upload assessment results and can view each other’s reports.
  • Closing the Loop Documentation: Maintaining a “change log” or portfolio of improvements. Some institutions keep a record of all curriculum or pedagogical changes made in response to assessment findings (e.g., a table listing issue identified, action taken, outcome after change). This makes it easy to demonstrate a pattern of improvement over time.
  • Faculty Development for Assessment: Providing training on assessment methods (such as designing rubrics or interpreting data) and recognizing good assessment practice (awards or incentives for departments that effectively use assessment to improve). This fosters a culture where assessment is seen as a tool for enhancement rather than a chore.

Self-Assessment Questions for 3.E

  • Process: Do we have a well-defined process for assessing student learning outcomes at all levels (course, program, general education, co-curricular)? Who is responsible for oversight and coordination of assessment, and is there broad participation?
  • Data to Improvement: Can we cite specific examples where assessment results led to improvements in curriculum, instruction, or support services? How do we ensure that identified issues are followed up with action (and later re-assessed to check for improvement)?
  • Coverage: Are we assessing the right things? Do our assessments adequately measure our stated learning outcomes, including higher-order skills? Are we perhaps over-assessing some areas and under-assessing others – how do we balance meaningful assessment without overburdening faculty?
  • Closing the Loop Culture: How engaged are faculty and staff in the assessment process? Is assessment an integral part of our institutional culture (e.g., discussed in faculty meetings, included in program review, considered in resource allocation)? What support do we provide to help faculty effectively assess and improve student learning?
  • Results: What do the results of our most recent assessments tell us about our students’ strengths and weaknesses? Have we set performance benchmarks or targets, and how do our students measure against them? If there are gaps, what improvement plans are in place?

Core Component 3.F: Program Review

Revised Core Component 3.F (2025) – “The institution improves its curriculum based on periodic program review.” (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading) Core Component 3.F ensures that institutions engage in systematic program review processes to evaluate the quality and effectiveness of academic programs, and that the findings of these reviews lead to curricular improvements. Program review typically involves a comprehensive evaluation of an academic program on a rotating cycle (e.g., every 5 years), covering aspects such as curriculum relevance, student outcomes, enrollment, faculty, resources, and sometimes external benchmarking.

Integration note: This component is new to Criterion 3 in 2025, derived from the old Criterion 4.A (2020) which focused on “quality of educational offerings” including program review and related academic oversight. The crosswalk shows old 4.A now corresponds to 3.F (Program Review). Under old 4.A, institutions had to maintain regular program reviews, evaluate credit (including transfer and prior learning credit), ensure course rigor and dual credit parity, maintain specialized accreditations, and evaluate graduate success. In the revised criteria, the emphasis in 3.F is on the program review process itself and curricular improvements, while some specific elements of old 4.A (like transfer credit policies) might be treated under assumed practices or as evidence within program review. Institutions should thus demonstrate a formal program review system and show how it results in enhancements.

Evidence to Demonstrate 3.F (Program Review and Improvement)

  • Program Review Policies and Schedule: A policy or handbook for program review that outlines the purpose, criteria, and timeline for reviewing programs. Include the schedule of reviews (e.g., which programs were reviewed in the past year and which are upcoming) and the responsible committees or offices. This shows that program review is regular and structured.
  • Sample Program Review Reports: One or two recent program review self-study reports and corresponding findings/recommendations. These should demonstrate the depth of analysis (covering curriculum, assessment results, enrollment trends, etc.) and include improvement plans. If possible, also provide evidence of follow-up (e.g., action plans or memos from academic affairs approving certain changes based on the review).
  • Curricular Changes from Reviews: Documentation of specific curricular improvements made as a result of program review. For instance, meeting minutes or memos that show a program was revised or continued/discontinued after review findings. Or a summary of changes (new courses added, outdated courses removed, pedagogical changes, etc.) implemented following a review.
  • Credit Evaluation Policies: Because old 4.A included evaluating all credit that the institution transcripts (transfer credit, prior learning, etc.), include evidence that the institution has robust transfer credit and prior learning assessment policies (Providing Evidence for the Criteria for Accreditation). This can be in the form of published transfer guides, credit equivalency lists, or a description of how transfer credits are reviewed for quality. While this might be considered an assumed practice now, it remains relevant to program quality assurance.
  • Dual Credit and Accreditation: Evidence that special program modalities are held to standard. For example, if the institution offers dual-credit (high school) courses, provide a statement or data showing those courses are equivalent in outcomes and rigor to the on-campus versions. Also, a list of specialized accreditations for programs and their status – this shows an external validation of program quality and is often part of program review considerations.
  • Outcomes of Graduates: Part of understanding program effectiveness is knowing how graduates fare. Include any evidence on student success after program completion, such as job placement rates, graduate school admission rates, licensure exam pass rates for relevant programs. Program review documents often include these; highlighting them shows the institution evaluates whether programs meet their claimed goals.

Example Effective Practices for 3.F

  • Comprehensive Program Review Process: Having a program review process that is broad in scope and involves multiple stakeholders. Practice example: an institution’s program review might require input from faculty, students, alumni, and an external reviewer from the discipline. The resulting report covers curriculum, student learning outcomes, enrollment and retention data, faculty qualifications, resources, and external comparators. This comprehensive approach, with external benchmarking, often leads to stronger improvement plans.
  • Governance Oversight: Using governance bodies (faculty senate or academic council) to ensure program review recommendations are implemented. For example, after a program review, a summary with recommendations could be presented to the academic affairs committee and progress tracked annually. This ensures accountability for making improvements.
  • Integration with Planning: Aligning program review with strategic planning and budgeting. If program reviews identify needs (e.g., new equipment, more faculty), those requests feed into budget decisions. Likewise, strategic initiatives (like focusing on student success outcomes) are checked via program review results. This alignment makes program review meaningful and action-oriented.
  • Sunset or Revitalization Decisions: Utilizing program review data to make tough decisions about underperforming programs (e.g., program suspension or major redesign). As an example, one institution’s program review discovered “deficiencies in assessment” in several programs and recommended discontinuing two of them; the institution also merged annual assessment reporting into the 5-year review cycle to strengthen oversight. This demonstrates closing the loop not just on curriculum changes, but on program viability decisions as well.
  • Transparency and Communication: Publishing a summary of program review findings (non-confidential aspects) to the campus community. This can take the form of an annual academic program review report or dashboard indicating which programs are doing well and which are improving. Transparency builds trust in the process and highlights continuous improvement.

Self-Assessment Questions for 3.F

  • Process and Frequency: Do we conduct program reviews regularly for all academic programs? What is the cycle (e.g., every 5 years) and have we been adhering to it? If any programs have been skipped or delayed, what is the plan to catch up?
  • Review Content: What elements do our program reviews examine? Do they cover curriculum quality, student learning outcomes, enrollment and retention, faculty sufficiency, resources, and external benchmarks? Are these reviews rigorous and evidence-based?
  • Follow-up: How are program review results used? After a review, who sees the results and who is responsible for ensuring recommended improvements are made? Can we give examples of improvements or decisions (e.g., expanding a program, discontinuing a concentration, updating curriculum) that came directly from program review findings?
  • Credit and Rigor Oversight: How do we ensure that credits accepted (transfer, dual credit, prior learning) are of sufficient quality? Is this evaluated within program reviews or elsewhere? For instance, do program faculty review the success of transfer students or dual-credit students in subsequent courses to verify equivalency?
  • Outcome Alignment: Do our program reviews consider student success outcomes (graduation rates, placement, licensure) for each program? If a program’s graduates consistently underperform on an external metric, how is that addressed through the review and improvement process?

Core Component 3.G: Student Success Outcomes

Revised Core Component 3.G (2025) – “The institution’s student success outcomes demonstrate continuous improvement, taking into account the student populations it serves and benchmarks that reference peer institutions.” (Providing Evidence for the Criteria for Accreditation) This component centers on student success metrics – such as retention, persistence, completion, graduation rates, job placement, etc. – and expects institutions to not only track these outcomes but also to show they are continuously improving or at least taking improvement actions. Importantly, it calls for context: comparing outcomes to peer benchmarks and being mindful of the institution’s own student demographics when setting goals or evaluating success.

Integration note: Core Component 3.G is new in 2025 and originates from the old Criterion 4.C (2020) on retention, persistence, and completion rates. The crosswalk maps old 4.C to new 3.G (Student Success Outcomes). Under old 4.C, institutions had to have defined goals for retention/completion, collect and analyze data, use it for improvement, and ensure their methods were sound. All those expectations are carried into 3.G: setting ambitious yet realistic targets for student success, measuring outcomes by different student groups, and making changes to improve those outcomes. New in 3.G is an explicit mention of benchmarking against peers, emphasizing external context for what “success” means.

Evidence to Demonstrate 3.G (Student Success and Improvement)

  • Defined Student Success Metrics and Goals: Documentation of the institution’s key student success indicators (e.g., first-to-second-year retention, graduation rates within 150% of normal time, course completion rates, job placement rates, etc.) along with official goals or targets for those metrics. This could be an excerpt from a strategic plan or a performance report that lists current rates and future goals, noting they are appropriate to the mission and student population.
  • Data on Retention and Completion: Internal reports that collect and analyze data on student persistence and completion. For example, a report on cohort retention trends, disaggregated by demographics (race, first-gen status, etc.). Include evidence that this analysis is systematic – perhaps participation in consortia like the Student Success KPI exchange, or using tools like the National Student Clearinghouse for tracking completions.
  • Improvement Initiatives: Evidence of actions taken to improve student success rates based on data. This might include minutes from a student success committee meeting where strategies are discussed, or descriptions of initiatives like new advising programs, tutoring expansions, early alert systems, curriculum revisions aimed at improving gateway course pass rates, etc. Tie these initiatives explicitly to identified issues in the data (e.g., “we started a math summer bridge because of low math completion rates”).
  • Benchmarking Information: Any studies or data that compare the institution’s outcomes to peer institutions or national norms (Providing Evidence for the Criteria for Accreditation). For example, benchmark reports showing how your graduation rate compares to similar colleges, or use of national survey benchmarks (NSSE engagement indicators, for instance). Also, if the institution is part of state or national initiatives (Complete College America, Achieving the Dream, etc.), include evidence of participation and use of those benchmark frameworks.
  • Alumni Success Data: Beyond graduation, evidence of student success after leaving the institution: job placement data, graduate school acceptance rates, licensure exam pass rates, alumni surveys with outcome information (Providing Evidence for the Criteria for Accreditation). These demonstrate whether students are succeeding in their goals post-completion and can inform program improvements (for instance, if employer surveys suggest a skill gap, that feeds back into curriculum).
  • Published Reports and Accountability: Copies or links to any public-facing student outcome reports (such as an annual fact book or an outcomes dashboard on the college website). This not only evidences that data exists, but also that the institution is transparent and accountable about its student success outcomes.

Example Effective Practices for 3.G

  • Setting Ambitious Goals: Establishing stretch targets for improvement (e.g., increase retention by 5% in five years) that are grounded in data. A good practice is involving faculty and staff in setting these goals so there is broad commitment. Document this involvement through meeting notes or planning retreats.
  • Data Disaggregation and Equity Focus: Regularly breaking down success metrics by subpopulations to identify equity gaps. For example, an institution might track retention for first-generation students vs. others and find a gap, then launch a First-Gen support initiative. The evidence of practice is the disaggregated data analysis and the targeted program created in response.
  • Consortium Benchmarking: Joining a data-sharing consortium (like the National Community College Benchmark Project or a state higher-ed benchmark group) to compare performance. Practice example: a college participates in the Student Success Scorecard project and uses those peer comparisons to identify areas for improvement (e.g., their transfer rate might lag peers, prompting an action plan).
  • Continuous Improvement Projects: Utilizing methodologies like continuous improvement cycles (Plan-Do-Study-Act) specifically for student success interventions. For instance, running a pilot program for improving online course success, measuring outcomes, and scaling it up if successful. Documenting these pilots and their results shows a commitment to evidence-based improvement.
  • Integration with Planning and Budget: Tying student success goals to the strategic plan and allocating resources accordingly. If improving completion rates is a strategic priority, effective practice is to fund initiatives such as additional advisors or tutoring resources. The budget documents or strategic plan progress reports can serve as evidence that resources follow intentions and improvements are monitored.

Self-Assessment Questions for 3.G

  • Metrics and Goals: What are our key student success metrics (retention, graduation, etc.), and have we set appropriate goals for them? Are these goals known throughout the institution and aligned with our mission and student demographics (e.g., if we serve non-traditional students, do we account for part-time completion rates, etc.)?
  • Data Analysis: How do we collect and analyze data on student success? Do we regularly examine who is not persisting or graduating and why? Are there clear reports that decision-makers review, and do we segment the data by program or student sub-group to pinpoint challenges?
  • Actions and Impact: What initiatives have we implemented to improve student success, and how do we know if they are working? Can we link improvements in our metrics to specific changes (for example, an increase in retention after introducing a new first-year seminar)?
  • Benchmarking: How do our student outcomes compare to those of similar institutions? If we are below benchmarks in certain areas, what are we doing to learn from others or adopt best practices? Conversely, if we exceed benchmarks, how are we maintaining that edge or sharing those practices?
  • Institutional Commitment: Is improving student success a visible priority in our planning and resource allocation? For instance, do our strategic plan or accreditation self-study explicitly address retention and completion? Are faculty and staff broadly engaged in student success efforts (through committees, task forces, etc.)?

Conclusion: The above guide is organized according to HLC’s revised 2025 Criterion 3 structure, covering Core Components 3.A through 3.G. Each section provides evidence ideas (drawn from HLC’s Providing Evidence for the Criteria resource (Providing Evidence for the Criteria for Accreditation) and prior Criterion 3/4 guidance), example practices from the 2020 documentation, and self-reflective questions. Institutions preparing for accreditation should use this as a roadmap to gather documentation and craft their assurance arguments. By addressing each Core Component with solid evidence and honest self-analysis, an institution can demonstrate that it meets Criterion 3: Teaching and Learning for Student Success – thereby showing that it provides quality education and continuously improves its academic programs, resources, and support in service of student learning (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading).

Sources and HLC Resources:

  • HLC Criteria for Accreditation (2025) – Criterion 3: Teaching and Learning for Student Success (Revised Criteria for Accreditation and Assumed Practices: Policy Changes Adopted on Second Reading)
  • HLC Providing Evidence for the Criteria (2020 & 2025) – guidelines and examples of evidence for each core component (Providing Evidence for the Criteria for Accreditation)
  • HLC Policy documents and guidelines on assessment and improvement (see HLC Policy CRRT.B.10.010 and related materials)