Trudy Cherasaro, Director
Caitlin Scott, Director

For Kevin Fontanazza, a 6th-grade math teacher at a small South Carolina school, Teacher as Researcher was initially a bit of a mystery.

“I had never heard about it before and was curious as to what was involved.”

He and the rest of the school’s teachers had been signed up as a cohort to participate in the professional learning workshop, an initiative that rethinks how educators study and improve their craft. Educators participating in Teacher as Researcher are equipped to design and conduct experiments with promising evidence-based strategies in their own classrooms.

Fontanazza was quickly intrigued by the prospect.

“I heard that I would be testing out a teaching strategy, gathering data, [and] assigning an effect size to it. Being a numbers person like I am, that got me interested in seeing what kind of number I could basically see assigned to a certain teaching strategy, and whether or not it would be an effective one moving forward.”

Instructional Improvement Cycles

[Figure: A circular diagram showing the four steps of an Instructional Improvement Cycle: 1) select an instructional strategy; 2) implement the strategy; 3) collect data on strategy implementation; 4) analyze the data and reflect on the results.]

In Teacher as Researcher, participants conduct short research cycles called Instructional Improvement Cycles. These two- to four-week experiments let educators quickly gather data on a strategy’s effectiveness and make instructional shifts accordingly, without having to wait for end-of-year assessment data. Participants typically complete two cycles during the program.

Hundreds of instructional possibilities

One aspect of the learning series that impressed Fontanazza from the outset was the number of teaching strategies available to try through the program’s curated Strategy Clearinghouse. This online database contains hundreds of evidence-backed, standards-aligned instructional strategies from reputable sources like the U.S. Department of Education’s What Works Clearinghouse.

“I was amazed at how many there were, and it took me quite a while to actually go through them all and eventually hone in on the one that I wanted to try.”

Fontanazza’s strategy

For his first foray into Instructional Improvement Cycles, Fontanazza decided to test a specific strategy around mathematical problem-solving: providing incorrectly solved problems alongside correctly solved problems to show students what mistakes to avoid.

He saw this as an opportunity to put his preconception about the strategy to an empirical test. Sharing error exemplars with students was an approach he had tried earlier in his career but had since abandoned.

“I had kind of subscribed to the ‘pink elephant’ idea that if I never showed my students common errors, they wouldn’t think to make them. … I wanted to see if my preconception about this strategy was true or not.”

Running the Instructional Improvement Cycle

Fontanazza’s journey into classroom research didn’t happen in isolation. All the 6th-grade math staff at the school decided to coordinate their efforts, testing out the same error-exemplar strategy but across different units and ability levels. Fontanazza said this allowed everyone to “bounce ideas off each other and contribute in whatever ways we could to each other.”

While some teachers applied the approach to various algebra classes, Fontanazza selected an upcoming statistics unit covering topics like mean, median, and standard deviation.

“I knew from teaching it [before] that there are so many different ways that those [math problems] can be incorrectly calculated. So I knew going into that unit that if I was going to use this teaching strategy, it would just flow really nicely.”

Over the next several weeks, Fontanazza integrated incorrectly worked examples into his statistics instruction, purposefully surfacing and discussing common miscalculations. Along the way, he and his colleagues gathered data to measure the strategy’s effectiveness using pre-, mid-, and post-assessments.

An Unusual Finding Reveals the Importance of Reflection

Thanks to South Carolina’s digital Strategy Workshop tool, it was simple for the teachers to analyze the information they collected and calculate an effect size.

“All I had to do was basically just enter the data. I was surprised at the level of organization, and how much was provided to me to guide me in the journey. It really helped make the interpreting of the data much, much easier for me than I had thought.”

The numbers told Fontanazza an unusual story. His students demonstrated learning gains over the four-week unit both when he wasn’t using the strategy (the first half of the unit) and when he was (the second half). But learning gains were much larger when he wasn’t using the strategy: a surprising 3.44 effect size (based on mean gain scores from the pre- and post-assessment).
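
The article doesn’t show the formula behind that number, but one common way to turn pre/post scores into an effect size is a standardized mean gain: the average gain divided by the standard deviation of the pre-test scores. The sketch below uses invented scores to show how a low, tightly clustered baseline can produce a very large value; the actual Strategy Workshop tool may compute effect sizes differently.

```python
# A minimal sketch, with invented scores, of turning pre/post assessments
# into an effect size. Here the average gain is divided by the standard
# deviation of the pre-test scores -- one common formulation; the Strategy
# Workshop tool may calculate it differently.

from statistics import mean, stdev

def standardized_mean_gain(pre: list[float], post: list[float]) -> float:
    """Average (post - pre) gain divided by the pre-test standard deviation."""
    gains = [after - before for before, after in zip(pre, post)]
    return mean(gains) / stdev(pre)

# Hypothetical scores: a low, tightly clustered baseline (students with
# little prior statistics knowledge) makes even modest growth look huge.
pre = [10.0, 12.0, 8.0, 11.0, 9.0, 10.0]
post = [15.0, 18.0, 12.0, 17.0, 13.0, 15.0]
print(f"Effect size: {standardized_mean_gain(pre, post):.2f}")  # ~3.54
```

Because the hypothetical baseline scores barely vary, even a uniform gain of about five points yields an effect size above 3, which foreshadows the cohort’s explanation below.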

Findings like this illustrate the importance of the reflection phase of Teacher as Researcher. Fontanazza turned to his cohort and coaches to help him figure out why he got this result. He and the other participants had been meeting periodically with coaches from Marzano Research, the group that facilitates Teacher as Researcher, for workshops and discussion time throughout the program.

“[The coaches] definitely put all of us at ease as we went into this process … very affirmative with our feedback, and very conversational as well. At no point did I feel any stress or any anxiety about asking questions or sharing my results.”

The cohort collectively determined that the unusual effect size was likely due to students coming into the unit with little or no prior experience in statistics. The pre-test scores were very low to begin with, so even modest improvement looked dramatic by comparison.

“One of the discussions that we had during one of our sessions where we talked about the data was, ‘Do we think that the lack of prior knowledge could have led to such an exceedingly large effect size?’ So one of the things that I really liked about the process was that we were able to collect data and if it seemed exceedingly high or exceedingly low, that we were able to have conversations about them, and ascertain as to what could have led to [that].”

Fontanazza saw signs of positive impact from the error-exemplar strategy, too, just not quite as pronounced. When he used the strategy during the second half of the unit, his students continued to show growth, translating to a more expected 0.69 effect size with a high degree of statistical confidence.
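
The piece doesn’t say which test underlies that confidence claim; a paired t-test on matched pre/post scores is one standard way to check whether a mean gain could plausibly be chance. A minimal sketch, with made-up scores:

```python
# A hedged illustration of "statistical confidence": a paired t-test checks
# whether the mean pre-to-post gain could plausibly be zero. The source
# doesn't specify which test the tool actually runs, and the scores below
# are made up for demonstration.

from scipy.stats import ttest_rel

pre = [42.0, 55.0, 48.0, 60.0, 51.0, 45.0, 58.0, 50.0]
post = [55.0, 64.0, 59.0, 72.0, 60.0, 53.0, 70.0, 61.0]

t_stat, p_value = ttest_rel(post, pre)
# A small p-value means gains this consistent would be very unlikely by chance.
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")
```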

This spurred Fontanazza to continue using the strategy where he thought it would be most impactful.

“Based on my data, I included some more error analysis tasks in my teaching where most applicable, especially in the geometry units that immediately followed.”

Round Two: A Chance for Refinement

Though some of the cohort participants chose a different strategy to try the second time around, Fontanazza stuck with the same one for his second Instructional Improvement Cycle to refine his research based on initial results.

“I think that getting another shot at running an instructional cycle is helpful because you have the chance to reflect on how the first cycle went and improve.”

For his second cycle, Fontanazza decided to tweak both the content area and the research design. Teaching two 6th-grade honors-level classes, he implemented the error analysis strategy with one class (the intervention group) while teaching the other class as usual (the control group).

This time, the effect size favored the intervention group, which outperformed the control group. The difference in growth was modest, which Fontanazza attributed to honors students being less likely to make the same mistake repeatedly.
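
The article doesn’t detail how that two-group comparison was scored. A common choice is Cohen’s d computed on each class’s gain scores with a pooled standard deviation, sketched below; the function name and the numbers are illustrative assumptions, not the study’s data.

```python
# An illustrative sketch (names and numbers are assumptions, not the study's
# data) of a between-groups effect size: Cohen's d on each class's gain
# scores, using a pooled standard deviation.

from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Difference in mean gains divided by the pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    pooled_var = ((n_t - 1) * stdev(treatment) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5

# Hypothetical gain scores (post minus pre) for each honors class.
intervention_gains = [12.0, 15.0, 10.0, 14.0, 13.0, 11.0]
control_gains = [11.0, 13.0, 10.0, 12.0, 14.0, 9.0]
print(f"Effect size: {cohens_d(intervention_gains, control_gains):.2f}")  # ~0.53
```

Unlike the first cycle’s pre/post comparison within one class, this design benchmarks growth against a concurrent control group.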

Boosting teacher agency and leadership

Fontanazza’s two cycles with Teacher as Researcher gave him a lot to reflect on—not just about the error-exemplar strategy itself, but about student thinking, assessment design, and his overall teaching practice. His classroom research yielded genuine professional insights that went far beyond applying a single strategy.

“It really did help open my eyes to different ways of approaching instruction, different strategies, as opposed to just the same three or four that we, as educators, tend to focus in on all the time. And, the data gave me the chance to reflect and think back on my practice as to why the results were what they were, and it really helped me better understand what works for my students, how my students learn, and the best ways that I can support them in the classroom.”

He said he was especially grateful for how the program fostered participants’ agency.

“It was really exciting to have the chance to collect our data on our strategy and see how it was working in our classroom as opposed to being handed strategies and being told, ‘Okay, this is their effect size, so you should consider using them.’ It gave us a real sense of ownership as to what we were doing, which I really did appreciate.”

The initiative is also designed to cultivate educators’ classroom leadership. Looking ahead to his new role as math department chair, Fontanazza hopes to carry forward the Teacher as Researcher spirit of scholarly experimentation: “Any additional strategies that I can work out and test, and with feedback that I can bring back to the other teachers, I think would be helpful for sure.”

For current or prospective Teacher as Researcher participants, his advice is not to worry about the added time commitment. Fontanazza said that once he gained proficiency with the research protocols, the extra workload felt negligible. The main shift was adopting more of an inquiry mindset and a data-driven outlook.

“I hardly noticed I was doing anything different than my typical teaching.”

And with the number of strategies available to choose from, he pointed out, participants can find one “that can easily be worked into whatever unit you’re coming up with next.”