Educational Science in Action: Evaluating an Assessment Instrument


This topic contains 2 replies, has 2 voices, and was last updated by Paul Zachos 3 months, 1 week ago.

  • #1142

    Monica De Tuya
    Moderator

    It was a historic event: two ACASE associates completed the Cubes & Liquids (C&L) assessment activity for the first time as “students”! This has opened up many unique opportunities for reflecting on our work and for using the knowledge that results from such conversations to improve the process we use for administering, scoring, and generally managing the C&L activity. In fact, our recent participation in rating these two new student responses was the first time we used the revised learning goals, levels of attainment, and scoring guide. Hence, there is much we can learn.

    The purpose of this Forum thread is to evaluate the C&L process. We will consider what worked, what didn’t work, and opportunities for improvement in all aspects of the rating process: the documents (i.e. the student response form and the scoring guide), the rating instrument (i.e. the assessment event in the AIS), and the rating activity itself (i.e. determining ratings and entering them in the AIS).

    I imagine that our conversation here in the Forum will follow a semi-structured format: I will provide a prompt and ask you all to reflect on it and post responses, but also to respond to one another. I will facilitate and organize that discussion by drawing attention to particular posts, making connections, reflecting on common themes, and so on.

    So let’s get started. First we’ll gather general, gut reactions to the activity: please respond to this post by sharing your thoughts, on a general level, about what this C&L rating experience was like for you. What were your expectations for this rating experience, and were those expectations met? Why or why not?

  • #1169

    Monica De Tuya
    Moderator

    Hi All,

    Here are my thoughts with regard to the above prompt: “What were your expectations for this rating experience and were those expectations met? Why or why not?”

    This was the first time I administered the activity as an instructor, and therefore the first time I had rated the activity from that perspective as well. It had also been several years since I had worked with the process and the tools of this activity. My expectation was that this would be a complex and time-consuming activity, both to administer and to rate. That expectation was met! I found that preparing to administer, administering, and rating all took much focus, energy, and intellectual exertion. Was this because of the tools or the process, or because I was coming to the activity after a long time away and because of my own way of working? I think it is the latter.

    This is not to say that it was a negative experience – on the contrary, I found the intellectual exertion exciting and productive!

    Looking forward to hearing your thoughts on your expectations for the activity. Then, as this conversation progresses, we will look more closely and specifically at the actual tools and processes.

    Best,
    Monica

  • #1181

    Paul Zachos
    Moderator

    Cubes & Liquids (C&L) may be used by teachers to support their daily practice. It is also sufficiently refined that it may be used as a research instrument. As an example of the latter, we are now using C&L to develop concepts and practices relevant to the validity and reliability of educational measures, working at the level of practical learning goals and outcomes. Teachers may also be seen as engaged in research when they systematically build or refine assessment instruments.

    The questions associated with daily practice vs. research are distinct.

    For daily practice the primary question is, “How well are my students performing with regard to some learning goal?”

    For research the questions are more of the nature of: “How confident am I in the results of this assessment? How valid are they, and how dependable? What is the likelihood that the judgment made (i.e. the judgment of the level of attainment of the learning goal) is accurate?”

    In our sister conversation [Interpreting Educational Data] a request was made to see the comments made by raters regarding their judgments of level of attainment of a learning goal. This is part of evaluating the accuracy of judgments. For research purposes, then, it makes sense for raters to be generous in their comments (i.e. in giving reasons for their judgments) and for all stakeholders in the process to be attentive to those reasons.
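
    To make the dependability question concrete, here is a minimal sketch of one common way to quantify agreement between two raters: raw percent agreement alongside Cohen’s kappa, which corrects for the agreement expected by chance. This is illustrative only; the rating levels and data below are hypothetical, and this is not necessarily what the AIS itself computes.

        # A minimal sketch (hypothetical data; not necessarily the AIS's method):
        # quantifying how well two raters agree on levels of attainment.
        from collections import Counter

        def percent_agreement(rater_a, rater_b):
            """Fraction of responses on which the raters assigned the same level."""
            return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

        def cohens_kappa(rater_a, rater_b):
            """Observed agreement corrected for agreement expected by chance."""
            n = len(rater_a)
            observed = percent_agreement(rater_a, rater_b)
            counts_a, counts_b = Counter(rater_a), Counter(rater_b)
            # Chance agreement: both raters pick a level independently, in
            # proportion to how often each rater actually used that level.
            expected = sum((counts_a[lvl] / n) * (counts_b[lvl] / n)
                           for lvl in set(rater_a) | set(rater_b))
            return (observed - expected) / (1 - expected)

        # Hypothetical judgments of level of attainment for eight student responses.
        rater_1 = ["none", "partial", "full", "partial", "full", "none", "partial", "full"]
        rater_2 = ["none", "partial", "full", "full", "full", "none", "none", "full"]

        print(f"Raw agreement: {percent_agreement(rater_1, rater_2):.2f}")  # 0.75
        print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")       # 0.63

    A kappa noticeably below the raw agreement indicates that part of the apparent consensus is what chance alone would produce; that is exactly the kind of evidence the research questions above call for.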
