Using Data to Improve Student Learning
“Because there is no perfectly accurate assessment tool or strategy, institutions should use multiple kinds of measures to assess goal achievement. Assessments may be quantitative or qualitative and developed locally or by an external organization. Assessment tools and strategies should clearly relate to the goals they are assessing…they should not merely be anecdotal information nor collections of information that happen to be on hand. Strategies to assess student learning should include direct, clear, visible and convincing evidence, rather than solely indirect evidence of student learning such as surveys and focus groups.” (Handbook for Periodic Review Reports, 2007, p. 24)
Definitions
The assessment plan specifies what data are to be collected and why.
Implementation is the actual collection of the data.
The assessment report describes how the assessment data, once collected (results of surveys, tests, projects), were used to make changes.
Direct evidence is student-generated work, such as tests, reports, papers, and projects.
Indirect evidence includes surveys and focus groups.
Student learning, as used here, is specific and clear: it means that the objectives and outcomes already identified by the department have been met.
Questions to Help Faculty Write Assessment Reports
What forms of direct evidence and indirect evidence demonstrate student learning? How do they demonstrate that learning has taken place?
What forms of direct evidence and indirect evidence suggest to the faculty that changes should be made in the program, major or curriculum? Have changes been made already? Based on what data or rationales? Have the changes worked?
(Changes can include: a new minor, program, or major; the revision of a course sequence; adding prerequisite courses; altering course assignments or introducing new teaching and learning assignments; and revising evaluation methods.)