Ensuring quality in a quality audit: Assessing quality in program scale-up – our experience from an eastern state in India

Vasudha Chakravarthy and Riza Mahanta

June 2020

While scaling up has always been a desired goal, it has posed challenges. Frameworks and guides on scaling up exist, such as the MSI framework [1], along with tools and techniques for practitioners. In India, too, there has been a focus on scaling up, and “implementation pathways [2]” have been suggested. Nonetheless, scaling up remains difficult.

A key challenge in scaling up is maintaining the same quality as the pilot. A pilot receives concentrated resources and attention; that focus gets diffused at scale. Hence, the outcomes of a scaled-up intervention are often not the same as those of the pilot.

A year and a half ago, we at Development Solutions got the opportunity to assess quality in the scale-up of a community-based health intervention, to be implemented by the Accredited Social Health Activists (ASHAs) of the public health system. A successful pilot intervention model, implemented in multiple contexts with a focus on vulnerable populations, was to be scaled up to an entire state by the state health system. We were tasked with assessing quality and providing inputs and programmatic recommendations over the two-year scale-up period.

As we began the assignment and pored over the literature, brainstormed, discussed, and debated, we decided that, given the context, we would focus on two key aspects:

  • Ensuring that we understood and defined our quality indicators and data sources correctly
  • Ensuring that our own processes and data collection were of the desired standard to help assess quality

This blog is to share some of our dilemmas, challenges, and experience over the last year and a half. 

Understanding and defining quality in implementation – and how to measure it

A community-based intervention, driven by local needs and contexts, brings with it significant variability in implementation (across socio-economic contexts, geographies, etc.). Nonetheless, based on the pilot, two indicators of quality for community processes were finalized for the scale-up. One, that a minimum number of representatives from the target groups are consistently part of the process. Two, that the community activities follow the processes as outlined in the design. Indicators were similarly defined for the capacity development processes.

Once the indicators were defined, the next task was to determine how they would be measured. Given that we had to measure quality independently, we could not rely on the data being collected by the health system or reported by the community health workers. We had to independently ‘observe’ the processes to assess training quality, community participation, and the conduct of activities. Given the scale and duration of the intervention, this meant that observations had to be conducted each month, across 21 districts, over 24 months!

Which brought us to our next question – how do we ensure consistency across so many observations and how do we ensure data quality? 

Undertaking observations at scale brought with it the same challenge as implementation scale-up – variability! How one investigator observes and records processes may differ significantly from how another does. To address this:

  • We ensured that our observation tool was entirely quantitative – with closed-ended questions 
  • Each of the aspects to be observed was defined and codified 
  • The tool was tested for three months, with multiple rounds of data collection by the entire team, followed by feedback and training/re-training to ensure consistency in how the questions and indicators were understood
  • Data collectors were recruited from each district so that they would continue for the two-year period. Thankfully, team attrition was minimal.
  • A ready-reference manual was developed and shared with the team on how to observe and collect the data. 
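A fully quantitative, codified tool of this kind can be sketched in miniature as follows. This is purely our illustration of the idea of closed-ended, coded observation fields; the field names and codes are hypothetical, not those of the actual audit tool:

```python
# Illustrative sketch of a closed-ended, codified observation record.
# Field names and response codes are hypothetical examples only.

ALLOWED_CODES = {
    "activity_followed_design": {1: "yes", 2: "partially", 3: "no"},
    "target_group_present": {1: "yes", 2: "no"},
}

def validate_record(record):
    """Return a list of errors; only codified responses are accepted."""
    errors = []
    for field, codes in ALLOWED_CODES.items():
        value = record.get(field)
        if value not in codes:
            errors.append(f"{field}: invalid code {value!r}")
    return errors

# A valid record passes; an out-of-range code is rejected.
ok = validate_record({"activity_followed_design": 2, "target_group_present": 1})
bad = validate_record({"activity_followed_design": 5, "target_group_present": 1})
```

Because every response must fall within a predefined code, free-text variation between investigators is designed out of the data at entry time.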

We also put in place additional quality-check processes to ensure that the data was robust and accurate:

  • Each investigator had to photograph the intervention activity both at the start and end, to record participation. This was in addition to the physical counting. 10% of the photographs were cross-checked with the observed numbers 
  • Details of participation were also taken from the ASHAs for triangulation. The difference, if any, between recorded and observed numbers was noted. Averaged over six quarters, the two did not differ significantly.
  • Spot checks by cluster coordinators also helped to ensure quality. If there was a significant difference between the two recordings (investigator and cluster coordinator), the forms were rejected, and another activity was observed instead. 
  • Finally, all forms were checked for completeness and accuracy by the state head, and regular back-end data quality checks were done by the analysis team.
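The triangulation and 10% photograph cross-check described above can be sketched roughly as follows. The tolerance threshold, field names, and activity IDs are our own illustrative assumptions, not the study's actual parameters:

```python
import random

# Hypothetical sketch of the back-end checks: flag activities where
# observed and ASHA-reported counts diverge, and draw the ~10% sample
# of records whose photographs are checked against the counts.

def flag_discrepancies(observations, tolerance=2):
    """Flag activities where the observed and reported counts diverge."""
    flagged = []
    for obs in observations:
        diff = abs(obs["observed_count"] - obs["asha_reported_count"])
        if diff > tolerance:
            flagged.append(obs["activity_id"])
    return flagged

def sample_for_photo_check(observations, fraction=0.10, seed=42):
    """Draw a reproducible random sample for the photograph cross-check."""
    rng = random.Random(seed)
    k = max(1, round(len(observations) * fraction))
    return rng.sample(observations, k)

observations = [
    {"activity_id": "D01-A001", "observed_count": 18, "asha_reported_count": 19},
    {"activity_id": "D01-A002", "observed_count": 12, "asha_reported_count": 20},
]
flagged = flag_discrepancies(observations)
photo_sample = sample_for_photo_check(observations)
```

In practice, any flagged record would trigger the form rejection and re-observation described above, rather than a silent correction.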

We also felt that structured observations alone might not provide the needed insights. Hence, qualitative interactions were undertaken every six months – to understand aspects of participation and the perceived quality of capacity development, hand-holding support, monitoring, and review. Interactions were undertaken with program implementers, trainers, officials, and the community.

Over six quarters, we have observed over 2,000 community-level intervention activities facilitated by the ASHAs, observed the training cascade and on-the-job training, and triangulated results with data reported by the ASHAs. We have also understood the perspectives of officials, implementers, and the community on the processes, their quality, challenges, and impact. Through our efforts, we provided inputs on district performance on quality indicators and on successes and challenges in implementation. Our feedback enabled the needed course correction over the scale-up period for sustained implementation and impact!

This note shares our experience in conducting quality audits. Please do not quote or use it in any manner. If you need additional details, we request you to reach out to us.

[1] Management Systems International. 2016. Scaling Up – From Vision to Large-Scale Change: A Management Framework for Practitioners, Third Edition. See http://msiworldwidewjzvccptpx.devcloud.acquia-sites.com/sites/default/files/additional-resources/2018-11/ScalingUp_3rdEdition.pdf

[2] See UNDP. 2012. Scaling Up Successful Livelihood Pilots Implementation Pathways. Summary of Proceedings. December.
