

Assessment Critique

Critique of My Principles of Business Assessment 

In 2024, I created an end-of-term assessment for my Form 4 Principles of Business (POB) class. At the time, I had not yet been introduced to the assessment standards taught in the PGDipEd programme. Critiquing the assessment now through the lens of assessment design, validity, and reliability, and drawing on the theoretical principles examined earlier, has highlighted both strengths and key gaps that will guide how I plan future assessments.


Under the standard of assessment design, some aspects of my 2024 classroom test did conform to expectations. It included a mix of question types targeting lower- and higher-order thinking, such as defining terms, comparing concepts, and applying knowledge to business scenarios. These items reflected an effort to assess both recall and reasoning. However, the test did not fully conform to design standards because I did not use a table of specifications (TOS). Without this tool, I could not confirm that questions were evenly distributed across syllabus objectives and cognitive levels. On review, about 60% of the items focused on basic recall, while few assessed deeper application or analysis.

The assessment also did not conform to principles of developmental appropriateness. It was given to Form 4 students in their first term of POB, and some higher-order tasks may have exceeded their readiness. Including multiple-choice or scaffolded items would have better supported their transition into the subject.

I also recognize now that the wording of some questions made the test harder to understand than it needed to be. Although the content came from the syllabus, a few instructions were less clear than they should have been, which may have confused students and limited how well they could show what they actually knew.

From an inclusivity standpoint, the design did not reflect strategies for diverse learners. Features such as clearer formatting, chunked instructions, and visual cues were not included. Moving forward, I plan to use a TOS, ensure better cognitive balance, and apply simple inclusive strategies to create more equitable assessments.

When I reflect on the standards of validity and reliability, I now recognize that a test can only be useful if it consistently measures what it is intended to. In terms of reliability, my assessment did not fully conform to this standard. While the questions were curriculum-based, I did not provide specific guidance for scoring extended responses. There was no structured rubric or marking scheme for open-ended items. Without this, different teachers may interpret student answers differently, leading to inconsistent scores. This lack of scoring consistency makes the test less reliable and reduces fairness in evaluating student performance.

On the other hand, the test did conform to the reliability standard in its use of familiar question formats and a consistent structure throughout. This supported student comfort and reduced variation in how students approached the task, which contributes to a more reliable testing experience.

Regarding validity, the assessment partially conformed to the standard. The questions were aligned with syllabus topics and reflected content taught in class. However, some items did not conform to validity expectations because they were too broad, vague, or poorly worded. For example, a prompt that asked students to “discuss” without giving context or direction may not accurately measure their understanding. In such cases, even well-prepared students could perform poorly due to unclear expectations.

This reflection has made me realize that strong assessments must go beyond content coverage. They must also provide clear instructions, structured scoring, and valid alignment with intended outcomes so that all students have a fair and equal opportunity to show what they know.
 

Suggestions for Improvement


To address areas that did not conform to the standards of assessment design and validity, I plan to make two key changes moving forward.

🎯 First, to improve assessment design, I will begin using a table of specifications (TOS) during the planning process. A simple TOS maps each syllabus objective against cognitive levels, such as knowledge, application, and analysis, with a target number of items in each cell. This will help ensure that questions are distributed across syllabus objectives and cognitive levels, and it will support a better balance between lower- and higher-order thinking, aligned with students’ developmental stage.

🎯 Second, to improve validity, I will revise how extended-response questions are worded. Instead of vague prompts like “discuss,” I will provide clear instructions and guiding cues. This will ensure students understand what is required and allow them to demonstrate their knowledge more accurately.
