A recent working paper from Elrha’s HIF and ALNAP, Evaluating Humanitarian Innovation, brings perspective to the evaluation process itself as it is applied to innovation in the humanitarian sector. The paper takes 15 case studies of HIF grantees as its inputs. But rather than reviewing the cases, author Alice Obrecht (Terms Of Reference guest on Episode 102) focuses on the role of evaluation as an innovation identifier, promoter and sharpener.
Innovation is a turbulent process, and most approaches to evaluating it attempt to find the certainty that supposedly hides at the center of the whirlwind. But what if the soul of a revolutionary new approach is circling around us and we’re just not looking for it?
This concern is the starting point of Obrecht’s paper. The outcomes are often the processes, she argues, and the magic of capacity building and “iterative learning” goes unnoticed by evaluators.
Throughout the review, Obrecht sheds light on how this growing concern about finding the right way to evaluate humanitarian innovation is in fact an opportunity. Alongside product and process innovation, for instance, she places “paradigm innovation”, referring to new “underlying mental models“. Another proposition is to measure the effects of a humanitarian intervention against a “complexity baseline”. This frame could give a different understanding of an intervention’s outcomes within a spatial and temporal context.
An overarching theme of Obrecht’s work has to do with reconsidering “binary” frameworks. While interventions must be judged by the impact they were designed for, we should not dismiss new practices or insights, even if they belong to a different development category than that of the original intervention.