November 18, 2022 at 5:02 pm #1470
From the Journals of Practical Educational Science
Reflections on Erik Gustafson’s question at our December Institute Meeting
Our meeting began with Erik Gustafson posing a technical question about assessment. I feel that my answer was not as clear and cogent as it could have been, so I would like to try again.
The question, I believe, was: How do you rate a student’s attainment of a learning goal when there are several opportunities to assess it, but the student has demonstrated attainment on only one of those opportunities? (I hope I got that right.) Questions of this kind arise often, and so I feel it is worth our attention to consider the issues involved.
Such a question first arose for Jason and me when we saw that students were missing opportunities to demonstrate their attainment of a particular learning goal in those sections of the Deer Assessment that we had designed for that purpose. Looking only in those places, one would conclude that there was no evidence of attainment. But then we found, in the students’ responses to other portions of the assessment, that they had indeed attained the learning goal, and that it was our way of assessing attainment that had not been successful. So now we recommend looking in the place where evidence of attainment is expected to appear, but not neglecting the opportunity to find such evidence in other places as well. In fact, evidence that a student has attained a particular learning goal may appear outside the formal assessment activity itself. The use of narrative rather than short-answer responses heightens the possibility that such evidence will be found.
I know that during the summer workshops I presented the idea of a scale that counted the number of times a student demonstrated attainment of a goal: for example, a score of ‘1’ for a single occurrence, a score of ‘2’ for two or more occurrences. But be careful! A score of ‘2’ or more does not represent higher attainment; it simply increases our confidence that the student has attained the learning goal, because we have multiple sources of evidence of attainment. This is indeed a kind of reliability.
What if a student shows evidence of attainment in one response and evidence of the opposite in another response on the same assessment? When this is the case, I always rate the responses to indicate that the student has not attained the learning goal. Why? Because we are not giving the student a grade! There is no reason to give the student the benefit of the doubt. Indeed, there is a benefit to identifying the possibility that the learning goal has not been attained, thereby indicating that further instruction may continue to be productive.
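The counting scale and the contradictory-evidence rule described above can be sketched in a few lines of code. This is only my illustration of the logic; the function name and the True/False encoding of evidence are my own assumptions, not part of any published instrument.

```python
# Illustrative sketch only: the names and encoding here are assumptions,
# a rendering of the rating rules described above.

def rate_attainment(evidence):
    """Rate a student's attainment of one learning goal.

    `evidence` is a list of True/False judgments, one per place in the
    assessment where evidence for the goal appeared (True = demonstrated
    attainment, False = evidence of the opposite; places where no
    evidence appeared are simply omitted from the list).
    """
    if False in evidence:
        # Contradictory evidence anywhere on the same assessment:
        # rate as not attained, flagging that further instruction
        # may still be productive.
        return 0
    # Otherwise count the demonstrations.  A score of 2 or more does
    # not mean higher attainment than 1; it only raises our confidence
    # (a kind of reliability) that the goal was attained.
    return sum(1 for e in evidence if e)

print(rate_attainment([True]))          # one demonstration -> 1
print(rate_attainment([True, True]))    # two demonstrations -> 2
print(rate_attainment([True, False]))   # contradictory evidence -> 0
```

Note that the score of 0 here deliberately conflates “no evidence” with “contradictory evidence”: in both cases the rating signals that attainment has not been established, not that the student has failed.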
In Cubes and Liquids we provide a section at the end of the student response form where students can indicate ways in which their knowledge or understanding has changed as a result of the assessment. This provides an opportunity to see whether their level of attainment on a learning goal, say, Density of Solid Objects, has risen above the level shown in their earlier responses.
The human capabilities that we are assessing are usually not directly observable. Assessment is our method for eliciting observables that give us a basis for making inferences about the attainment of these invisible human qualities. The assessment instruments that we create are always imperfect instruments for assessing human capabilities, and there are many reasons why our assessments can be successful only to a certain extent. And yet assessment of practical learning goals still provides the best, the most central, the most fundamental information for educational decision making: information about what was and what was not learned.
January 6, 2023 at 4:14 pm #1487
Paul Zachos, Moderator