Competency-Based Assessment (Not Just Content): A Practical Guide
Competency-based assessment measures what students can actually do, not just what they remember. Here is how to design tasks that show real learning.

You have probably had this student. The one who scored an A on the unit test, named every cause of the American Revolution, listed the dates, recited the Declaration. Then, three weeks later, you asked them to compare the Revolution to another social transformation, and they froze. Not because they did not know the content. Because nobody had ever asked them to use it.
This gap between knowing and doing is the single most uncomfortable truth in classroom assessment. The test said the student had learned. The transfer task said something else. And if you have stood at the front of a classroom, you have felt this discrepancy enough times to wonder whether your assessments are measuring the right thing.
Most modern curricula already point in a better direction. Common Core, the Next Generation Science Standards, the UK National Curriculum, the Australian Curriculum, and frameworks across Canada and New Zealand all use the language of competencies, skills, and twenty-first-century capabilities. But when it comes time to assess, many of us still default to multiple-choice and short-answer formats that reward memorization. This guide is about closing that gap with competency-based assessment that is realistic for a working teacher.
Content vs competency: the difference that matters
Content knowledge is the raw material of a discipline. Competency is what a student can do with it.
Content sounds like this: "What year did the French Revolution start?" or "Define photosynthesis" or "What is the formula for the area of a triangle?" These questions have one right answer. They are quick to grade. They tell you whether a fact has been stored in memory.
Competency sounds different. It sounds like: "Analyze the causes of a major social transformation and argue which one was most decisive." Or: "Design an experiment that would let you measure whether a houseplant is photosynthesizing." Or: "Plan the layout of a community garden in a triangular plot and justify your design." These questions have many valid answers. They are slower to grade. And they tell you something a fact-recall question never can: whether the student can actually use what they have learned.
The distinction is not that content is bad. You cannot analyze the French Revolution without knowing when it happened and who Robespierre was. Competencies are built on top of content, not instead of it. The problem is when assessment stops at the content layer and never asks whether the student can climb to the next floor.
A useful test: if a student can pass your assessment by memorizing a study guide the night before, you are measuring content. If they need to think on their feet, combine ideas, and make defensible choices, you are measuring competency.

Anatomy of a competency task
A well-designed competency task has four ingredients. Miss one, and you are usually back to a content task wearing a costume.
Authentic context. The task is anchored in a real or realistic situation. Not "calculate the area," but "the school is repainting the gym and needs to know how much paint to buy." Authenticity does not mean the scenario must be true. It means it must be plausible, and it must give students a reason to care beyond the grade.
Combination of multiple knowledge areas. The student cannot solve the task by retrieving one isolated fact. They have to weave together several things they have learned, possibly from different units or even different subjects. A competency task in middle-school science might require math, writing, and scientific reasoning at the same time.
Multiple valid solutions. There is more than one defensible answer. Two students can reach different conclusions, both well-supported, and both deserve credit. This is what makes competency tasks feel uncomfortable to grade at first. It is also what makes them honest. Real-world problems rarely have a single answer key.
Required justification. The student must explain why they made the choices they did. Showing the work, defending the reasoning, citing evidence. This is where you find out whether they actually understand or whether they got lucky. Justification is the X-ray that lets you see inside the answer.
If you want a deeper look at how this fits alongside formative checks, exit tickets, and quizzes, you may find our overview of the assessment types every teacher should know helpful as a companion piece.
Four examples by level and subject
Here are four concrete tasks across grade bands and subjects. Each one is designed around the four ingredients above.
Elementary, English Language Arts (Grades 3-4)
Content version: "Identify the main idea and three supporting details in this passage."
Competency version: "Your school librarian wants to add a new book to the third-grade shelf. Read the three book reviews provided. Choose which book you would recommend, write a short letter to the librarian explaining your choice, and use at least two reasons drawn from the reviews."
Why it works: authentic context (recommending to a real audience), combines reading comprehension with writing and persuasion, multiple valid choices (any of the three books could be defended), and the letter forces justification. Same reading skills as the content version, but the student has to do something with what they read.
Elementary, Math (Grade 5)
Content version: "Calculate the perimeter and area of these five rectangles."
Competency version: "Your class has $200 to build a small vegetable garden in a rectangular plot behind the school. Fencing costs $4 per foot. Soil costs $2 per square foot. Design a garden, calculate your costs, and explain why your design is the best use of the budget."
Why it works: real-world constraint, combines area, perimeter, multiplication, and budgeting, multiple workable designs, and the student has to defend trade-offs (a long, thin plot needs more fencing per square foot but might fit the space better; a square gets the most area per foot of fencing, which shifts the cost toward soil).
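If you want to sanity-check which designs actually fit the budget before assigning the task, the cost rule is simple enough to script. This is a quick planning sketch, not part of the student task; the function name and the sample dimensions are illustrative:

```python
# Garden task feasibility check.
# Fencing costs $4 per foot of perimeter; soil costs $2 per square foot.
# Budget: $200.

def garden_cost(length_ft, width_ft):
    fencing = 4 * 2 * (length_ft + width_ft)  # $4/ft times the perimeter
    soil = 2 * length_ft * width_ft           # $2/sq ft times the area
    return fencing + soil

print(garden_cost(5, 8))    # 104 fencing + 80 soil = 184, under budget
print(garden_cost(4, 6))    # 80 fencing + 48 soil = 128, under budget
print(garden_cost(10, 10))  # 160 fencing + 200 soil = 360, over budget
```

Note that a 10-by-10-foot plot already costs $360, so the $200 budget genuinely forces students into trade-offs rather than letting one obvious design win.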
Middle School, Science (Grades 7-8)
Content version: "List three factors that affect plant growth."
Competency version: "A local community center has noticed their indoor plants are dying. They have asked your class to investigate. Design an experiment that would identify the most likely cause. Describe your variables, your method, what data you would collect, and how you would interpret the results."
Why it works: authentic stakeholder, requires understanding of biology and the scientific method, multiple valid experimental designs, and students must justify why their design isolates the variable in question. This task is aligned with NGSS practices, the UK's "working scientifically" strand, and the Australian Curriculum's science inquiry skills, all in one go.
High School, History or Social Studies (Grades 10-11)
Content version: "Name the major causes of World War I."
Competency version: "A history podcast wants to do an episode titled 'The Most Underrated Cause of World War I.' Choose one cause, write a five-minute script that argues for its importance, and use at least three pieces of historical evidence to support your case. Address one likely counter-argument."
Why it works: authentic format (podcast scripts are real things students consume), combines historical knowledge with argumentation and source use, multiple defensible choices (alliances, nationalism, imperialism, the assassination), and the counter-argument requirement pushes students into genuine analysis instead of a list dressed up as an essay.
Rubrics: the natural ally
The most common objection to competency tasks is grading. If two students reach different conclusions, how do you grade fairly? The answer is rubrics, and once you have built a few, they become the most useful tool in your assessment kit.
A competency rubric is not a checklist of right answers. It describes levels of performance across the qualities you actually care about. For most tasks, three to five criteria are enough. Common ones include the quality of evidence used, the strength of reasoning, the clarity of communication, and the accuracy of any factual content embedded in the work.
A simple four-level scale works well for most classrooms: emerging, developing, proficient, and advanced. For each criterion, you write one or two sentences describing what each level looks like. The student who used two relevant pieces of evidence and connected them clearly is proficient on the evidence criterion. The student who used four pieces and explained how they interact across time periods is advanced.
Two practical tips. First, share the rubric with students before they start the task. This is not making the task easier; it is making the target visible. Second, when you grade, mark the rubric first and write the comment second. The rubric tells you and the student where the strengths and gaps are. The comment turns that into next steps.
A well-built rubric also makes your assessment more defensible to parents, administrators, and the students themselves. Instead of a single grade that feels arbitrary, you have a transparent description of what the work showed and what it did not.

Is it worth the extra time?
Let us be honest. A competency task takes longer to design than a multiple-choice quiz, and it takes longer to grade. There is no version of this where you save time on the front end.
What you gain is information. A competency task tells you what your students can actually do with their knowledge. It tells you whether your teaching transferred or just stuck to the test. It surfaces misconceptions that a true-or-false question would have hidden. And it produces work that students remember, because they had to think.
You do not need to replace every assessment overnight. A reasonable starting point is one competency task per unit, alongside the quizzes and checks you already use. Over a year, that is six to ten tasks where you find out what your students can really do. Pair it with a clean rubric and shared exemplars from previous students, and the grading load becomes manageable.
Competency-based assessment is not a different philosophy. It is the same teaching you already do, finishing the sentence. You taught the content. Now you find out what they can do with it.
Draft My Lesson is the AI-powered lesson-planning tool built for English-speaking K-12 teachers. Plan your lessons in minutes and spend more time on what matters. Try it free.