A week ago, the Lessoncast team attended an edtech event hosted by New Schools Venture Fund and Imagine K12. Among the gathering of education companies, new and veteran, Eric Ries spoke interview-style with Jennifer Carolan, sharing his lean startup methodologies and discussing how they relate to education startups. The prevailing theme seemed to be that every sector (including education) thinks that it’s special, and every sector (including education) isn’t. Lean startup principles work the same for edtech startups as they do for other companies.
I have noticed, however, that the conversation about how to apply lean thinking to K-12 innovation is just getting started. Recently, Jennie Dougherty wrote a letter on her Beta Classroom blog explaining the need to develop lean methods for researching the efficacy of education innovations. In my experience, the core principles remain the same, but what does a case example in the K-12 setting look like? How can lean methodology be applied to help schools create or find innovations and measure the impact on student learning? And how can this process move as fast as the innovation itself? Given the journey I’ve traveled over the last two years (founding an education startup and applying lean principles to measure impact in a school environment), I figured I’d share the lessons I’ve learned in the K-12 sphere.
I’ve divided the journey (which is still a work in progress) into several segments:
I. Developing an MVP for K-12 Innovation
II. What to Build
III. How to Measure Impact
IV. What We Learned & When to Pivot
V. What I’d Do Differently Knowing What I Know Now
Part 1 – Developing an MVP for K-12 Innovation
The first step in applying Lean Startup Methodology is figuring out the problem that needs to be solved. As an assistant principal in a large middle school, one of my chief responsibilities was to improve student achievement in reading. (Side note: Yes, the state measures improved achievement through student scores on high-stakes tests, but I also value helping students develop higher levels of comprehension to promote lifelong learning.) Working with the principal, department chair, and teacher leaders, we looked at student data and noticed that our students struggled most with applying general reading process strategies to support comprehension. We also noticed a trend: our scores dipped in sixth grade and slowly improved in grades seven and eight. After surveying broader data, we realized that this trend was statewide. We developed several hypotheses about the cause, including the thought that most elementary schools spend 90 minutes to two hours (or more) on literacy instruction, while our students have a 45-minute language arts/reading class. Our strategy was to help teachers implement reading strategies across the curriculum to provide more time for literacy instruction and authentic reading comprehension experiences.
Developing an MVP was the key step that differentiated our journey from other data-driven improvement initiatives. Eric Ries defines a minimum viable product (MVP) as “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” It’s essential because it fuels the build-measure-learn loop (more details in upcoming entries). Developing an MVP is not formulaic, and it is actually quite difficult even in its usual context of launching a new product. Not one to back down from a challenge, I sought to apply this thinking to implementing a new instructional idea in a school environment.
First, we identified a handful of reading strategies that would support comprehension across content areas, but specifying the strategies was not enough. We needed to be concrete – and in agreement – about what we expected to happen in the classroom as a result of implementing a specific teaching practice. How else could we consistently measure impact?
I developed an outline for breaking down the essential components of an instructional idea, and that outline evolved into the Lessoncast framework. We used the framework to boil down the what, why, and how of each reading strategy. We defined which essential elements needed to be present for effective implementation and which aspects could vary according to student need and teaching style. Focusing on one strategy at a time, we developed a compact, concrete, common understanding of what we were going to do and what we wanted to see in the classroom.
In my mind, I began thinking about the MVP as the minimum viable practice, and each lessoncast served to clarify which elements of an instructional idea needed to be in place to collect the maximum amount of learning about teacher implementation and student progress. At times, the Lessoncast framework helped us realize that what we were calling a “strategy” was in fact too broad to effectively define and measure. We had to break down each strategy into components that were specific, actionable, and measurable.
All of the lessoncasts created for our school initiative are available for viewing in the Lessoncast gallery. In my next entry, What to Build, I’ll share more details on how we used the MVP (minimum viable practice) to cycle through the build-measure-learn loop.