Using iOBE to Assess Students' Ability to Solve Problems
Wednesday, July 30, 2025
What Are We Really Measuring in Problem-Solving?
In engineering education — and in many other disciplines — we often highlight “problem-solving ability” as a core learning outcome. It's written in course syllabi, emphasized in program objectives, and cited as a critical skill for future professionals. But when it comes to actual assessment, there’s often a disconnect between this aspiration and the methods we use to evaluate students.
Most assessments tend to focus narrowly on whether the student arrived at the correct answer, sometimes with partial credit awarded for showing steps. While this approach may capture the outcome of a student’s work, it often fails to reveal how the student got there — what they understood about the problem, how they planned their approach, and what choices they made during the solution process.
In other words, we claim to value problem-solving as a skill, but we rarely assess the cognitive process behind it. What remains invisible is the thinking journey — the way a student interprets the question, builds a mental model, navigates uncertainties, and reflects on the plausibility of their answer. This gap matters, because it’s the process — not just the final result — that defines whether a student truly knows how to solve problems.
A Structured Way to Solve Problems
If we want to assess students' problem-solving ability more meaningfully, then we need to define it more carefully—not just as a destination, but as a process. Over the past decade, I’ve been developing and refining a structured method I call the 3-Step Approach, designed to help students tackle problems in a more intentional and organized way.
The three core steps are:
- Understand
- Strategize
- Solve
A fourth step, Evaluate, is often included when appropriate to encourage students to reflect on the reasonableness, limitations, or broader implications of their solution. However, I intentionally don't include it as part of the 3-Step process, since many test or exam questions tend to involve smaller-scale problems that do not require deeper evaluation beyond arriving at a solution.
Each of these steps reflects a distinct stage of cognition. Rather than diving straight into equations, students are encouraged to begin by grasping the essence of the problem — through sketching, interpreting what is known and unknown, and identifying what’s being asked. Once that foundation is clear, they proceed to plan their approach before executing it. Finally, they revisit their solution with a critical lens.
This structure not only mirrors how expert problem-solvers actually work, but it also gives students a practical, repeatable framework they can carry into different contexts — be it academic, professional, or real-world.
To support students more concretely, I’ve expanded this structured approach into a 3x3 Framework, where each major step is broken down into three sub-steps. This helps guide learners more precisely through the thinking process. You can explore this framework — and the story behind its development — in two earlier Medium articles.
By introducing this structure in class, I found that many students who previously relied on vague, intuitive approaches began to work through problems more deliberately. The framework gave them something to hold onto — a map of what it means to “solve” rather than just “answer.”
But even with this structure in place, one important question remained: How do we assess these steps in a way that is consistent, meaningful, and actionable — for both lecturers and students?
Assessing These Steps with iOBE
Having a structured method to solve problems is only part of the solution. The next challenge is how to assess students along that structure — consistently, meaningfully, and at scale.
Over the years, I’ve developed a set of Master Rubrics that allow me to evaluate student performance across the key problem-solving steps: Understanding, Strategizing, Solving, and, where applicable, Evaluating. These rubrics are designed to be flexible enough to apply across various question types and difficulty levels, while still preserving the logic of the 3-Step Approach. They also carry over to other kinds of assessment, including project presentations, technical reports, and essays.
Once students’ marks are assigned across these stages, the next step is to make sense of the data. This is where the iOBE software becomes essential. It allows me to convert these step-by-step evaluations into clear visual patterns that reveal how the process unfolds — not just for individual students, but for the class as a whole.
Here’s how it works:
- First, the marks are entered into a pre-formatted Excel sheet. If the data is already available, this step usually takes less than five minutes.
- Then, with a single click, iOBE extracts the data and processes it within seconds — even for an entire course. (Users interact with the software through its simple GUI panel, as shown in the figure below.)
- The software then automatically generates two key visualizations (in addition to other data): Box Plots and Population Charts.
The Box Plots display statistical summaries for each stage — minimum, maximum, median, and quartile ranges — allowing us to see how tightly or widely student performance is spread.
The Population Charts group students by performance bands, using intuitive color coding (e.g., green for high performers, red for those struggling). These charts make it immediately clear how many students fall within each outcome range, and where the bulk of the class stands.
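For readers who like to see the mechanics, the sketch below shows, in Python, one way such summaries could be produced from a table of per-step marks. This is not the iOBE implementation; the column names, the sample marks, and the band thresholds (70% and 50%) are illustrative assumptions.

```python
# A minimal sketch (not the actual iOBE implementation) of summarizing per-step
# marks into box plots and colour-coded performance bands.
# Column names, marks, and thresholds below are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-student marks (percentages) for each problem-solving step
marks = pd.DataFrame({
    "Overall":       [78, 62, 55, 81, 47, 69, 73, 58],
    "Understanding": [85, 70, 66, 90, 60, 75, 80, 68],
    "Strategizing":  [75, 60, 50, 80, 45, 65, 70, 55],
    "Solving":       [72, 58, 48, 78, 40, 63, 68, 52],
    "Evaluating":    [60, 45, 35, 70, 30, 55, 58, 40],
})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

# Box plots: min, quartiles, median, and max for each step
marks.boxplot(ax=ax1)
ax1.set_ylabel("Marks (%)")
ax1.set_title("Spread of performance per step")

# Population chart: number of students in each performance band per step
bands = {"High (>=70%)": "green", "Middle (50-69%)": "orange", "Low (<50%)": "red"}
counts = pd.DataFrame({
    "High (>=70%)":    (marks >= 70).sum(),
    "Middle (50-69%)": ((marks >= 50) & (marks < 70)).sum(),
    "Low (<50%)":      (marks < 50).sum(),
})
counts.plot(kind="bar", stacked=True, color=list(bands.values()), ax=ax2)
ax2.set_ylabel("Number of students")
ax2.set_title("Students per performance band")

plt.tight_layout()
plt.show()
```

The intent mirrors iOBE’s output: the box plot exposes the spread within each step, while the stacked band counts show at a glance how the class is distributed.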
A Real Example from My Course
To illustrate, here’s an example from one of my courses:
To produce the graphs above, I compiled student marks from several problem-solving questions across two separate tests. Each question was evaluated using the same Master Rubrics on the 3+1 steps for solving problems. After importing the data into iOBE, the software aggregated the results and produced visual outputs that revealed more than raw marks ever could.
Each of the two charts in the figure above includes five columns. These columns, from left to right, are:
- Overall marks
- Understanding
- Strategizing
- Solving
- Evaluating
This layout mirrors the structure of the 3-Step Approach (plus Evaluation), allowing a direct comparison between students’ performance across each phase of the problem-solving process.
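If you are curious how such a five-column summary might be assembled from raw rubric marks, here is a minimal sketch under assumed data: each row records one student’s mark for one step of one question, and totals are pooled across questions and tests. The layout and numbers are hypothetical, not iOBE’s actual input format.

```python
# A sketch (assumed layout, not iOBE's file format) of compiling per-question
# rubric marks into the five columns: Overall plus the 3+1 steps.
import pandas as pd

rows = [
    # (student, test, question, step, mark, max_mark) -- hypothetical data
    ("S1", "T1", "Q1", "Understanding", 8, 10),
    ("S1", "T1", "Q1", "Strategizing",  6, 10),
    ("S1", "T1", "Q1", "Solving",       5, 10),
    ("S1", "T1", "Q1", "Evaluating",    3,  5),
    ("S2", "T1", "Q1", "Understanding", 9, 10),
    ("S2", "T1", "Q1", "Strategizing",  7, 10),
    ("S2", "T1", "Q1", "Solving",       6, 10),
    ("S2", "T1", "Q1", "Evaluating",    4,  5),
]
records = pd.DataFrame(rows, columns=["student", "test", "question",
                                      "step", "mark", "max_mark"])

# Marks earned and marks available per student per step, pooled across questions
totals = records.pivot_table(index="student", columns="step",
                             values=["mark", "max_mark"], aggfunc="sum")
per_step = 100 * totals["mark"] / totals["max_mark"]

# Overall column: total marks earned over total marks available
by_student = records.groupby("student")[["mark", "max_mark"]].sum()
overall = (100 * by_student["mark"] / by_student["max_mark"]).rename("Overall")

columns = ["Overall", "Understanding", "Strategizing", "Solving", "Evaluating"]
summary = pd.concat([overall, per_step], axis=1)[columns]
print(summary.round(1))
```

Once the marks are in this shape, the same plotting step shown earlier can be applied directly to the summary table.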
What emerged from these charts was a pattern I’ve seen repeatedly over the years — and one that often goes unnoticed without tools like iOBE. Performance tends to start relatively strong in Understanding, then declines noticeably in Strategizing, and continues downward through Solving and Evaluating. The final step is often the most overlooked, even though it plays a critical role in developing reflective thinkers.
These insights don’t just confirm what we suspect — they give us a clear, visual map of student cognition, which we can respond to through our teaching.
What the Data Reveals
When viewed together, the data from the class example revealed a consistent and telling pattern — one that aligns with my experiences across multiple cohorts over the years.
As we trace student performance from Understanding to Evaluating, there is a noticeable and progressive decline. The box plots and population charts reflect this clearly: students start reasonably well with identifying and interpreting the problem, but their performance drops as they move into planning, executing, and evaluating their solutions.
This is more than just a grading trend — it reflects how students engage with the cognitive demands of problem-solving.
Most students begin with a fair grasp of the problem. Their Understanding step is often adequate, although not always complete. They may sketch something, list what is known, and state the target variable. However, this step also includes making appropriate assumptions to model the physical situation effectively — a sub-skill that many students overlook or apply loosely. This oversight weakens the foundation upon which the rest of the solution process depends.
The next phase—Strategizing—is where difficulties become more apparent. Some students engage with it only superficially, while others skip it entirely. Rather than selecting a coherent path toward the solution, they often default to recalling formulas from familiar examples without confirming whether those formulas are actually relevant. Their plan, if present, lacks structure or justification.
This weak planning carries over into the Solving phase. Execution becomes error-prone when students operate without a clear roadmap. Even when the correct equations are known, their use may be inconsistent or fragmented. Students may substitute values prematurely, mismanage units, or misapply boundary conditions. One critical part of this step — often neglected — is the internal verification of results. This includes checking whether the final answer has appropriate units, whether it is within a plausible range, and whether the outcome aligns with expectations. These checks belong to the end of Step 3, and when omitted, they increase the likelihood of unnoticed errors.
The Evaluation step, which comes after solving, involves a broader kind of reflection. It asks students to consider the meaning, implications, or limitations of their results. In real-world contexts, this might involve asking: What does this result tell us? Can it inform a design decision? Would the solution still hold under different conditions? Yet in classroom settings, this step is rarely emphasized. As a result, most students treat their answer as an endpoint rather than a springboard for further thinking.
These patterns — made visible through iOBE’s visualizations — offer more than just confirmation of what we suspect. They provide a structured, data-driven diagnosis of where students are struggling. Rather than guessing whether a low mark was due to weak content knowledge or careless error, we can pinpoint the stage of cognitive breakdown. And this opens up opportunities to respond intentionally — through targeted instruction, guided practice, and more formative feedback.
Sharing Outcomes Data with Students
One of the most immediate benefits of using iOBE is how easily and clearly feedback (in the form of these outcomes data) can be shared with students — often right after a test, assignment, or project evaluation has been completed.
After entering the scores and running the software, I typically show students the aggregated visual results for the entire class. These graphs — always anonymized — help students see where the cohort as a whole performed well, and where they struggled. It also allows each student to reflect on where they stand in relation to others, not just in overall marks, but in the specific cognitive phases of problem-solving: Understanding, Strategizing, Solving, and Evaluating.
This kind of class-level transparency fosters a shared awareness of learning — not as competition, but as a collective journey through complex thinking tasks. It helps students normalize struggle in certain areas and identify the stages in which they personally need to improve.
Importantly, this approach to feedback is not limited to tests. For project work or coursework, the same visual structure can be used, but the plotted metrics (i.e., the horizontal axis) can be adjusted. Instead of using the 3+1 steps, the x-axes might represent course learning outcomes, Bloom’s taxonomy levels (or other taxonomies), or even specific topic categories, as illustrated in the graphs below. iOBE is flexible in this sense — it supports a wide range of learning goals while preserving a common format of feedback.
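As a rough illustration of that flexibility, the sketch below re-uses the same kind of plot with course learning outcomes on the horizontal axis instead of the 3+1 steps. The mapping of questions to outcomes and the marks are invented for the example; iOBE’s own configuration and interface will differ.

```python
# A sketch of swapping the horizontal axis: course learning outcomes (CLOs)
# instead of the 3+1 problem-solving steps. Mapping and marks are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical mapping of assessment items to course learning outcomes
item_to_clo = {"Q1": "CLO1", "Q2": "CLO2", "Q3": "CLO2", "Report": "CLO3"}

# Per-student percentage marks for each assessment item
marks = pd.DataFrame({
    "Q1":     [80, 55, 65, 90],
    "Q2":     [70, 40, 60, 85],
    "Q3":     [65, 45, 55, 80],
    "Report": [75, 60, 70, 88],
}, index=["S1", "S2", "S3", "S4"])

# Average each student's marks within each CLO, then plot the spread per CLO
per_clo = marks.T.groupby(item_to_clo).mean().T
per_clo.boxplot()
plt.ylabel("Marks (%)")
plt.title("Performance by course learning outcome")
plt.show()
```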
What students receive from this visual dataset is more than a percentage grade. They see their performance in context, broken down by stages of thinking or categories of competence. They gain clarity — not just about what went wrong, but about what kind of thinking needs to improve. A student may realize they skipped the planning phase, applied content without reflection, or didn't evaluate the relevance of their result. These are subtle gaps that conventional grading rarely makes visible.
Over time, this kind of feedback helps nurture self-awareness and more self-regulated learning. Students begin to engage not just with whether they scored well, but with how they approached a problem—and what they could do differently next time.
Feedback from Students
Over the years, I've received a great deal of feedback from students about their experience of learning and using this 3-Step Approach to solve problems. Many of them speak well of the approach (usually towards the end of the semester). But quite a few also describe how difficult it is to become familiar with it and to embed it in their problem-solving routine.
One recent comment was quite direct and honest in recalling the student's initial impression of having to go through this unlearning and relearning process with me:
At the beginning, I thought that this kind of teaching method was just wasting time and making it harder for us to obtain a good grade for this course. In this digital world, we can obtain an answer in a quick and easy way. Why do we need to think so much about a simple question? But after some lessons, I realized that this kind of teaching method not only trained us to think critically and systematically but also helped us appreciate the real application...
...it provided a structural [method] for me to answer an engineering question. In the past, I often rushed into calculations without fully grasping the problem’s intent. This approach slowed me down in a good way, which forced me to pause, sketch diagrams, make assumptions, and clearly define what I knew and what I needed.
Another student reflected on meaningful, and potentially lifelong, changes in her approach to learning:
Before taking this course, I tended to focus mainly on learning how to perform calculations to answer exam questions. I would memorise the steps and formulas required, often without fully understanding the theory behind the problem. The 3-Step Approach, however, forces me to really understand the theoretical concepts before doing any calculations. As a result, I have found myself learning more meaningfully, and I no longer rely solely on memorisation. I believe this change in myself will be beneficial for my future, regardless of the career path I pursue as the ability to understand a problem thoroughly before attempting to solve it is a fundamental skill in any job.
During the individual test, I really experienced how writing the “Understanding” part helped clarify the concepts in the question. By the time I reached the “Solution” part, I found that it was much easier than expected because I had fully understood the problem...
Why This Matters
In most classrooms, grades reduce a student’s performance into a single number — an average that hides the complexity of how they arrived at that outcome. While useful for summarizing achievement, such scores rarely tell us why a student succeeded or struggled.
What iOBE offers is not just finer resolution in marking — it offers a way to see the architecture of learning as it happens. It reveals trends across stages of thinking, uncovers recurring bottlenecks, and enables us to ask better questions about how students think — not just what they got right.
This kind of insight is valuable for instructors, but just as powerful for students. When learning is reduced to scores, students often assume their only path forward is to “try harder.” But when they see where their thinking faltered — whether in interpretation, planning, execution, or reflection — they gain specific, actionable guidance. They shift from blind repetition to informed revision.
Seen this way, assessment becomes more than a tool for judgment. It becomes a tool for awareness, intervention, and ultimately, growth. And tests are no longer endpoints—but mirrors for deeper learning.
Final Thoughts
Problem-solving is often treated as a black box. We pose a question, wait for an answer, and assign a grade. But what happens between the posing and the solving is where the real work lies — and where the true nature of learning resides.
When we begin to look inside that process — to see how students understand, plan, and reason — we start teaching differently. And when students see their own thinking with clarity, they start learning differently, too.
The iOBE software doesn’t impose a new pedagogy — it makes existing thinking visible. And in doing so, it supports a shift: from evaluating work to understanding minds, from checking answers to cultivating insight.
In the long run, helping students see how they think may be just as important as teaching them what to think. Because when they leave our classrooms, the problems will keep coming. What we give them, in the end, is not just content — but a way of engaging with complexity that lasts far beyond the exam.