Overview
Prior to introducing a new edtech tool or platform in their district, many technology departments decide to engage in a pilot – a trial period during which educators and/or students can test out an edtech product and assess its usability, alignment to classroom and district priorities, and impact on teaching and learning. A pilot program should have measurable goals, which can help the technology department assess whether it makes sense to invest further in the tool and, potentially, scale it to a wider audience in the district.
Selecting an edtech tool for a pilot should be a thoughtful and equitable process. Rather than the technology team simply researching and choosing the tools they want to test, empowering teachers to determine which tools may best serve their students’ needs can be a powerful approach to ensuring more voices are included in the process. Teachers deeply understand the unique needs of their classrooms, students, and students’ families, and can offer insight into edtech solutions that may have a larger impact on teaching and learning.
Mendon-Upton Regional School District (MURSD), a small suburban district in Massachusetts, has had a 1:1 device program since 2013 and currently offers a portfolio of over 200 edtech tools to their teaching staff. In an effort to streamline their tool offerings and focus on the most impactful solutions, the technology team created a pilot process in which teachers can nominate a tool they would like to test, explain why they believe the tool will enhance their classroom, and subsequently run a pilot for a defined amount of time. By including teachers’ voices in the selection process, the MURSD technology team hopes to identify the tools that are actually supporting student learning and thoughtfully invest in effective solutions.
As part of the MA EdTech Peer Learning Cohort, MURSD created a four-step approach to empower teachers to pilot new edtech tools in the classroom:
Nomination: Teachers nominate a tool they would like to test in their classroom. They can submit a Digital Tool Pilot Proposal Form, which asks for details such as what they will use the tool for, how often they plan to use it, which other colleagues will participate, and anticipated challenges. Teachers can submit the nomination form during three windows in the school year – September, January, and March – and, if a proposal is accepted, funding is provided subject to availability (e.g., the March window may not be offered if pilot funds are exhausted by the September and January rounds).
Review: The technology team reviews the proposals submitted by teachers and scores them on the Digital Tool Pilot Evaluation Rubric. The rubric begins with non-negotiables (e.g., the tool is within budget, aligns with the district’s priorities, and meets privacy requirements, including FERPA and COPPA). If the proposed tool satisfies them all, the reviewers move on to the “teacher criteria” section, which covers the information from the teacher-submitted proposal form. These responses are scored on a numerical scale, and if the tool meets the minimum threshold for an effective pilot (high perceived impact on learning, few anticipated challenges, and a strong implementation plan), the reviewers move on to the “technology criteria,” which include interoperability, single sign-on, data visibility, and additional technical requirements. Once all of these factors are considered, the technology team determines whether to recommend the tool for a pilot program; teachers can begin their pilot once their proposal is accepted. (A sketch of this gated review flow appears after these steps.) This comprehensive process ensures that the priorities of the district, teachers, and technology department are all considered equally, that the tool is financially feasible to pilot, and that the pilot is set up for success through thoughtful review of the conditions and logistics of the tool’s implementation.
Evaluation: All pilot teachers participate in an annual Digital Tool Evaluation process, which takes place in May of each school year. During this time, the technology team requests feedback from educators and students about their experience with specific tools, including newly piloted tools as well as more “mature” products in which the district wants to evaluate its investment. Educators and students self-report their experience using the district’s evaluation form for each tool.
Future Planning: If the pilot tool scores well in the evaluation process, the technology team will renew the pilot for the following school year and plan to scale the tool’s usage and adoption across the school or district; the tool will also be included in future budgets. If the pilot tool does not score well, the technology team may choose to abandon the product or consider whether the pilot could be redesigned and iterated upon. If a “mature” tool does not score well, it will come under a secondary review during the budget process that fall or winter. The results will be shared with the school community, along with whether the tool will be included in the following year’s budget or whether the current funding will be reallocated toward investments that better fit student and teacher needs.
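To make the sequencing of the review step concrete, here is a minimal, purely illustrative sketch of the gated flow in Python. The criteria names, point values, and thresholds below are hypothetical placeholders rather than MURSD’s actual rubric items or cut scores; the real Digital Tool Pilot Evaluation Rubric defines its own criteria and scale.

```python
# Illustrative model of a three-gate rubric review: non-negotiables first,
# then teacher criteria, then technology criteria. All names, scores, and
# thresholds are invented for demonstration.
from dataclasses import dataclass


@dataclass
class PilotProposal:
    tool_name: str
    non_negotiables: dict    # pass/fail checks, e.g. {"within_budget": True, ...}
    teacher_scores: dict     # numeric scores from the proposal form
    technology_scores: dict  # numeric scores for technical requirements


TEACHER_THRESHOLD = 12  # hypothetical minimum for an effective pilot
TECH_THRESHOLD = 9      # hypothetical minimum for technical fit


def review(proposal: PilotProposal) -> str:
    # Gate 1: every non-negotiable must pass before any scoring happens.
    if not all(proposal.non_negotiables.values()):
        return "not recommended: failed a non-negotiable"
    # Gate 2: teacher criteria must clear the minimum threshold.
    if sum(proposal.teacher_scores.values()) < TEACHER_THRESHOLD:
        return "not recommended: teacher criteria below threshold"
    # Gate 3: technology criteria are reviewed last.
    if sum(proposal.technology_scores.values()) < TECH_THRESHOLD:
        return "not recommended: technology criteria below threshold"
    return "recommended for pilot"


proposal = PilotProposal(
    tool_name="ExampleReader",
    non_negotiables={"within_budget": True, "aligns_with_priorities": True,
                     "ferpa_coppa_compliant": True},
    teacher_scores={"perceived_impact": 5, "anticipated_challenges": 4,
                    "implementation_plan": 4},
    technology_scores={"interoperability": 3, "single_sign_on": 4,
                       "data_visibility": 3},
)
print(review(proposal))  # -> recommended for pilot
```

Ordering the gates this way means a tool that fails a budget or compliance requirement is never scored at all, which mirrors the rubric’s intent of settling non-negotiables before weighing more subjective criteria.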
An effective pilot program takes into consideration the connection between a tool’s offerings and the needs of students and teachers, as well as alignment to a district’s instructional priorities. By empowering educators to have voice and agency in the edtech decision-making process, districts can better understand the student experience, ensure tools are equitable and accessible to all communities, and potentially garner stronger buy-in around a platform’s implementation.
More detailed information on designing and conducting edtech pilots, as well as how to determine next steps, can be found in the MA DESE EdTech Systems Guide, which also offers a supplementary workbook, a throughline example, and external resources.
Strategy Resources
Digital Tool Pilot Evaluation Rubric from Mendon-Upton Regional School District
The technology team at Mendon-Upton Regional School District (MURSD) designed this evaluation rubric to score teacher-submitted pilot proposals.
Running an Effective Pilot Program
This document provides a step-by-step approach to designing and executing an edtech pilot program.
Equity Focus
As much as possible, it is important to include specific, targeted populations in pilot programs to ensure the tool can meet the needs of all students, families, and the teachers who serve them. These populations may include:
Students learning English, their families, and the teachers who serve them. By including this community, pilot teams must proactively become familiar with edtech tools’ language accessibility and translation features.
Students with diverse learning needs, their families, and the teachers who serve them. By including this community, pilot teams will need to plan how to provide the necessary scaffolds and accommodations that students need to succeed using the tool.
Ensuring the involvement of specific populations in pilot programs can help prevent technology teams from purchasing and rolling out tools that create barriers to access or require additional scaffolds, retrofits, or workarounds for implementation.
When analyzing the pilot results, it is also important to note any variance in user experience across different populations in the school community. For example, if the overall feedback on a piloted tool is positive, technology teams may be tempted to proceed with renewing or expanding the pilot. However, if students of color, students from groups positioned furthest from economic opportunity, or students with disabilities have outcomes or feedback that is less positive, technology teams should seek to understand the nature of these challenges and incorporate what they learn into the decision-making process.
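The point above about disaggregating pilot feedback can be made concrete with a small example. The sketch below uses entirely invented group labels and satisfaction scores to show how an apparently positive overall average can mask weaker experiences for specific populations; a real analysis would use the district’s actual evaluation data and categories.

```python
# Illustrative sketch: disaggregate hypothetical pilot feedback by student
# group to surface subgroup variance. All labels and scores are invented.
from collections import defaultdict
from statistics import mean

# Each record: (student_group, satisfaction score on a 1-5 scale)
responses = [
    ("general_population", 4), ("general_population", 5),
    ("general_population", 4), ("general_population", 5),
    ("english_learners", 2), ("english_learners", 3),
    ("students_with_IEPs", 3), ("students_with_IEPs", 2),
]

by_group = defaultdict(list)
for group, score in responses:
    by_group[group].append(score)

overall = mean(score for _, score in responses)
print(f"overall mean: {overall:.2f}")

# Flag any group whose mean falls well below the overall average.
for group, scores in sorted(by_group.items()):
    gap = mean(scores) - overall
    flag = "  <-- investigate before renewing" if gap < -0.5 else ""
    print(f"{group}: {mean(scores):.2f} ({gap:+.2f} vs. overall){flag}")
```

In this invented data, the overall mean looks acceptable, but the English learner and IEP groups both fall a full point below it; under the reasoning above, that gap would prompt further investigation before the pilot is renewed or expanded.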