How we’re strengthening survey data quality
Our clients' impact projects share one defining characteristic: complexity. We want to cut through it by ensuring that the data we collect, often directly from stakeholders or beneficiaries, is as reliable, relevant, and meaningful as possible. The goal of this workshop wasn't to find instant solutions, but to build shared understanding and define next steps for improving our processes to generate high-quality data.
Amazingly facilitated by Jamie from Town.us, and equipped with great snacks, we went on a 9.5 km hike to the top of Großer Feldberg to create space for honest discussion and reflection on one of the core topics in our work: data quality in surveys.
Why this workshop?
Our day started with a clear focus: to better understand the root causes of low-quality survey data and identify the most effective ways we can address them. As a team working with thousands of data points every week, collected in diverse settings, in multiple languages, and often under resource constraints in off-grid areas, we know that data quality is never just a technical issue. It’s human, structural, cultural and sometimes even ethical.
So we created a setting that invited honest reflection and open thinking - far away from our usual routines.
What we see as the biggest threats to data quality
Early into the hike, we conducted a live team poll using digital forms. The goal was to pinpoint what we, as practitioners, see as the most pressing threats to survey data quality.
Here’s what stood out:
- Survey complexity & length: surveys are often too long and designed to meet multiple objectives, increasing respondent burden and risking data quality.
- Overlapping indicators: pressure to satisfy diverse stakeholder needs leads to redundant or overlapping questions, adding to survey length.
- Respondent fatigue: long, complex surveys reduce respondent engagement and compromise data reliability.
- Translation loss: translating surveys into multiple languages often results in loss of meaning and comparability.
- Enumerator inconsistencies: variations in how enumerators conduct surveys lead to inconsistent data across regions or teams.
The clearest finding that emerged: Survey complexity is one of the biggest threats to reliable, consistent data. Long, multi-topic questionnaires often dilute focus, reduce respondent engagement, and introduce risk at the data collection level.
From complexity to clarity: Where we see the most potential
Mid-hike, we ran a prioritisation matrix exercise, where each team member rated various potential improvements based on two factors:
- Impact on data quality
- Ease of implementation
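To illustrate the mechanics of such a prioritisation matrix: each option is rated on both axes and ranked by a combined score. The improvement names and ratings below are hypothetical placeholders, not our actual workshop results:

```python
# Hypothetical prioritisation matrix: each improvement is rated by the
# team on two 1-5 scales, then ranked by a combined score.
improvements = {
    "Survey simplification": {"impact": 5, "ease": 4},
    "AI-assisted data collection": {"impact": 4, "ease": 3},
    "Increased automation": {"impact": 2, "ease": 4},
}

def priority(ratings):
    # Multiplying (rather than adding) rewards options that score
    # well on BOTH axes, penalising one-sided candidates.
    return ratings["impact"] * ratings["ease"]

ranked = sorted(improvements,
                key=lambda name: priority(improvements[name]),
                reverse=True)
print(ranked[0])  # the highest-priority improvement
```

In practice the ratings come from averaging individual team members' votes; the ranking then serves as a starting point for discussion rather than a final decision.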
Through structured group reflection:
- Survey simplification emerged as the highest priority. Among all proposed improvements, shorter and more precisely structured questionnaires were considered both the most impactful and the most feasible. This reflects a clear need to reduce cognitive and operational burdens on both respondents and enumerators — particularly in multilingual and resource-constrained settings.
- AI-assisted data collection, including digital survey tools, automated anomaly detection, and AI-supported translation, was also viewed as highly promising. We agreed that these tools can significantly enhance efficiency and consistency, provided they are applied in conjunction with clear survey design and contextual understanding.
- Increased automation, while technically feasible and offering immediate efficiency gains, was seen as less impactful in the short term. Its full potential for improving data quality still needs to be demonstrated through further use and evaluation.
Overall, the consensus was that AI and automation can strengthen impact measurement systems, but their effectiveness depends on being embedded within a robust, well-designed survey framework.
Why we walked - rethinking data together
One clear insight from our workshop was this: Data quality isn’t just a technical challenge. It’s a negotiation challenge. In our work, we often collaborate with multiple stakeholders who each want to measure different outcomes within a single study. That ambition, while valuable, can result in overly long surveys, fragmented logic, and confusing questions.
But instead of seeing this complexity as a frustration, we’re choosing to see it as an opportunity: An opportunity to communicate more clearly about trade-offs, build trust earlier in the process, and co-create data journeys with greater transparency and shared ownership, both with our clients and within our team.
What made this workshop different wasn't just the topic. Of course, it was also the setting. Spending a day outdoors, walking side by side through nature, gave us space to slow down, reflect honestly, and think creatively. Some of the most valuable conversations happened outside the structured sessions, not across a table, but along the trail. We challenged assumptions, shared laughs, and came away not only with clearer priorities, but with a stronger sense of direction and connection as a team.
We believe in thinking rigorously, but also in thinking differently. For us, this is the foundation of sustainable innovation. This day reminded us that building impact also means investing in how we work together - with curiosity, openness, and joy.

How we are moving forward
Better surveys mean better data. And better data leads to better insights, for our clients and especially for the people, communities, and ecosystems they aim to serve.
Our next steps are practical yet iterative. We’re piloting focused improvements while staying open to feedback from the field. Based on team alignment during the workshop, we’ve prioritised four concrete areas of action:
- A structured decision framework to guide how we define survey scope and manage trade-offs with clients early on
- A revised, modular survey structure, co-designed with academic input to improve consistency and reduce complexity
- A machine learning model to flag low-quality data during collection, not just after
- A hybrid AI-human translation workflow to reduce language inconsistencies and ensure clarity across contexts
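To make the flagging idea concrete: real-time quality checks can start with simple rules even before any machine learning is involved. The checks, thresholds, and field names below are illustrative assumptions, not the model we are actually piloting:

```python
# A minimal, rule-based sketch of flagging suspect survey responses
# during collection. Thresholds are illustrative assumptions; a trained
# ML model would replace or complement these hand-written rules.

def flag_response(answers, duration_seconds, min_seconds=60):
    """Return a list of quality flags for one submitted response."""
    flags = []
    # Straight-lining: identical answers throughout suggest disengagement.
    if len(answers) > 3 and len(set(answers)) == 1:
        flags.append("straight-lining")
    # Speeding: completion far faster than plausible reading time.
    if duration_seconds < min_seconds:
        flags.append("too fast")
    # Missing data: unanswered questions reduce usability.
    if any(a is None for a in answers):
        flags.append("missing answers")
    return flags

print(flag_response([3, 3, 3, 3, 3], duration_seconds=45))
```

Running such checks while the enumerator is still with the respondent makes it possible to re-ask or verify answers on the spot, instead of discovering problems weeks later during analysis.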
These steps are part of a broader commitment: Treating impact data quality not as a one-time fix, but as an ongoing practice. We’ll continue refining our tools, listening closely to our team and partners, and sharing what we learn: The breakthroughs, the bottlenecks, and the lessons in between.
Thanks again to Jamie for an amazing day and workshop facilitation. If you want to combine being in nature with working on relevant organisational topics, you should definitely reach out to him.
Want to know more?
Get in touch with us and start measuring impact confidently.