How to evaluate Open-Ended responses on Common Assessments

Modified on Wed, 1 Apr at 12:54 PM

When a Common Assessment includes Open-Ended questions — such as text, audio, video, or drawing responses — those responses require manual scoring by the teacher before they factor into accuracy calculations. Until evaluated, the affected students are excluded from report aggregates so that unscored responses don’t distort the data.

This article walks you through the evaluation workflow: where to find pending evaluations, how to score responses, and how to use AI-suggested scores when available.

This article is for: Teachers (Educators) who proctor Common Assessments.


Identifying pending evaluations

When Open-Ended responses are waiting for your review, a yellow warning banner appears at the top of your Teacher Class Report:

"X participants have pending evaluations and are excluded from relevant aggregates."

You’ll also notice:

  • The Accuracy metric in the report header may show 0% or a lower-than-expected value (with a warning icon), because unevaluated students are excluded from the calculation.

  • On the Participants tab, affected students show "Pending evaluation" instead of an accuracy percentage.

  • On the Questions tab, Open-Ended questions display an "Evaluate" button.


Opening the evaluation panel

You can start evaluating in several ways:

  • Click "View details" on the yellow pending evaluation banner (the one that reads "X participants have pending evaluations and are excluded from relevant aggregates").

  • Click the "Evaluate" button next to any student on the Participants tab.

  • Click the "Evaluate" button on any Open-Ended question card on the Questions tab.

Any of these opens the "Evaluate participant responses" panel — a modal where all evaluation happens.


The evaluation panel

The evaluation panel is where you review and score all Open-Ended responses. It is organized into three areas:

Left side — Question navigation

A vertical list of all questions in the assessment (Q1, Q2, Q3, etc.). Questions with Open-Ended responses that need evaluation are marked with an orange dot. Click any question to jump to it. Once all responses for a question are scored, the orange dot disappears.

Center — Question details

When you select a question, the center area shows:

  • Question number, type, time allowed, and point value (e.g., "6. OPEN 3 mins 1 pts").

  • The question text. For audio response questions, the question text is followed by "Participants record a response."

  • "Evaluate responses using AI: ON" or "OFF" — indicates whether AI-assisted scoring is available for this question.

  • Tags — The curriculum standards tagged to this question (e.g., "TEKS.3.1B"). Tags appear only when the question has been tagged with standards.

Right side — Student responses

An "All Responses (X)" header shows the total number of student responses for the selected question.

Each student’s response is displayed as a card showing:

  • Student name. If the student has multiple attempts, each attempt appears as a separate card with an attempt identifier (e.g., "StudentName:1", "StudentName:2").

  • The student’s response — text content, audio player, or other format depending on the question type. Long text responses include a "see more" link to expand the full content.

  • Score field — An editable field showing the score range (e.g., "0–1.0 of 1.0"). Enter a score within the allowed range. An amber warning icon (⚠️) appears next to responses that have not been scored yet.

If the question has AI evaluation enabled, each response also shows an AI analysis section (covered below).

Saving

Scores save automatically as you enter them — you’ll see "Saving..." briefly in the top-right corner, followed by "All changes saved" once complete. Close the panel using the X button when you’re done.


Scoring responses manually

For each Open-Ended response:

  • Read the student’s response. Click "see more" if the response is truncated.

  • Enter a score in the score field.

  • Move to the next question using the question navigation on the left.

For audio responses: The response card displays an audio player with a play button. Click it to listen to the student’s recording before scoring.

For students with multiple attempts: Each attempt appears as a separate card. Score all attempts — the system uses the student’s best attempt for accuracy calculations once all are evaluated.
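
To make the best-attempt rule concrete, here is a minimal sketch in Python — the function name, data layout (each attempt as a list of (score, max_points) pairs), and logic are illustrative assumptions, not the product's actual code:

    from typing import Optional

    def best_attempt_accuracy(
        attempts: list[list[tuple[Optional[float], float]]],
    ) -> Optional[float]:
        """Return the best attempt's accuracy, or None while anything is unscored.

        Each attempt is a list of (score, max_points) pairs, one pair per
        question; score is None for a response still pending evaluation.
        """
        if not attempts:
            return None
        accuracies = []
        for attempt in attempts:
            # The best attempt cannot be determined until every response
            # in every attempt has been evaluated.
            if any(score is None for score, _ in attempt):
                return None
            earned = sum(score for score, _ in attempt)
            possible = sum(points for _, points in attempt)
            accuracies.append(earned / possible)
        return max(accuracies)  # the highest-scoring attempt wins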


Using AI-suggested scores

When "Evaluate responses using AI: ON" appears for a question, each student response includes an AI analysis section:

  • Suggested Score — The AI’s recommended score (e.g., "Suggested Score: 0/1").

  • ✓ Apply — Click to accept the suggestion and auto-fill the score field.

  • Explanation — The AI provides a detailed analysis of the student’s response, explaining why it assigned that score.

To use AI evaluation:

  • Review the AI analysis for each response.

  • If you agree: Click "✓ Apply" to auto-fill the score.

  • If you disagree: Type your own score directly into the score field.

You always have the final say

AI evaluation is a starting point. The AI’s explanation helps you understand its reasoning, but the decision is yours.


When AI is OFF: No AI analysis appears. Score the response entirely on your own.


How pending evaluations affect your metrics

Until all Open-Ended responses are evaluated, your report metrics are affected as follows (a short sketch after this list illustrates the shared rule):

  • Assessment accuracy: Only fully evaluated participants are included. If a student has any unevaluated response, they are excluded from the overall accuracy calculation.

  • Standard accuracy: A student is excluded from a standard’s accuracy until all of their responses to items within that standard are evaluated.

  • Item accuracy: A student is excluded from an item’s accuracy until their response to that item is evaluated.

  • Multiple attempts: If a student has multiple attempts, the best attempt cannot be determined until all responses across all attempts are fully evaluated. Until then, the student is excluded from all accuracy aggregates.
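
All four rules share one idea: a participant contributes to an aggregate only after every relevant response is evaluated. Here is a minimal sketch of that rule in Python — the function and data layout are assumptions for illustration, not the product's implementation:

    from typing import Optional

    def aggregate_accuracy(
        participants: dict[str, list[Optional[float]]],
    ) -> Optional[float]:
        """Average accuracy over fully evaluated participants only.

        Each value is one participant's list of per-question accuracies
        (0.0-1.0), with None marking a response pending evaluation.
        """
        included = [
            sum(scores) / len(scores)
            for scores in participants.values()
            if scores and all(s is not None for s in scores)  # skip pending
        ]
        if not included:
            return None  # nothing fully evaluated yet; the report flags this
        return sum(included) / len(included)

    # Example: Ana still has a pending response, so only Ben counts.
    # aggregate_accuracy({"Ana": [1.0, None], "Ben": [0.5, 1.0]}) -> 0.75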

Note: Your report’s accuracy and performance band distribution may shift as you complete evaluations. Check the "Last updated" timestamp in the report header before presenting data in meetings.



Step-by-step walkthrough

  • Open your class report from the Common Assessments page.

  • Look for the yellow banner — "X participants have pending evaluations and are excluded from relevant aggregates."

  • Click "View details" on the banner (or click "Evaluate" on the Participants or Questions tab).

  • The evaluation panel opens. Questions with orange dots on the left need evaluation.

  • Click a question with an orange dot to see all student responses.

  • For each response: Read it, then enter a score — or review the AI analysis and click "✓ Apply" if AI is available.

  • Work through all questions with orange dots. The dot disappears from each question once all its responses are scored.

  • Close the panel. Your scores have been saved automatically. The report will refresh with updated accuracy metrics.

Note: You don’t have to evaluate all responses in one session. Scores save automatically as you go. You can close the panel and return later to continue.



Additional notes

  • Changing a score: Reopen the evaluation panel, navigate to the question and student, and update the score.

  • Multiple attempts: Score all attempts for each student. The system uses the best attempt for accuracy once all are evaluated.

  • AI ON vs. OFF: AI evaluation is configured per question by the assessment creator. Some questions may have it enabled while others require manual scoring.

  • 0% accuracy despite completed students: If every student's submission includes an Open-Ended item that hasn't been scored yet, all of those students are excluded from the calculation and accuracy shows 0%. Once you evaluate the responses, accuracy updates to reflect actual scores (see the worked example below).
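
Worked example (hypothetical numbers): suppose 10 students each complete a 5-question assessment that contains one Open-Ended question. Until that question is scored, every student has a pending evaluation, so all 10 are excluded and the report shows 0% accuracy with a warning icon. Score the Open-Ended responses for 4 of the students and those 4 immediately re-enter the calculation; the other 6 remain excluded until their responses are scored as well.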
