
Above: Recent Akvo FLOW and M&E training workshops in Bolivia, Guatemala and Honduras. Photos by Ethel Méndez Castillo.
I joined Akvo a little over four months ago, and when I tell people that I, a self-proclaimed monitoring and evaluation (M&E) specialist, work for an organization that develops open source software, I usually get a blank stare. M&E is a central component of the new Akvo USA Foundation strategy, so an M&E specialist joining the team is no accident; it is strategic. But why?
First, what is M&E?
M&E has become something of an umbrella term for the process by which organizations generate evidence about their results. It has been around for some time but has become increasingly popular over the last decade, driven by the need to measure progress on the Millennium Development Goals (MDGs) and by a series of high-level international fora on aid effectiveness.
Organizations focused on strengthening their M&E activities, but soon realized that monitoring and evaluation cannot be separated from project planning. Monitoring is based on indicators set out in planning documents, so if the project plan is wrong, we are likely to measure the wrong things. One of the most common planning mistakes is that organizations don’t take the time to understand the problem well enough before they design a solution, or they focus on output-based indicators instead of outcomes or impacts. That is, they focus on ticking the boxes for activities they are responsible for doing, like delivering things. But they may not measure what people do with the things they receive, or the effects those things have on their lives, which are usually the outcomes we would like to measure.
Take, for example, an economic development project where the implementing agency reports the number of, say, beehives and beehive management workshops it delivers to project participants, but not the changes in income that participants may experience from selling honey and other derived products. That’s focusing on activities, not outcomes. Could it be that the beehives are just giving people more work without any additional benefits?
Evaluation is often lumped together with monitoring and thought of as the sum of monitoring data over time. It is more than that. Evaluation is about collecting data (which may or may not be related to monitoring data) systematically, to answer questions and make judgments about a project. It can – and if you ask me, should – take a broader look and challenge the assumptions that underpin the project’s theory of change (a diagram and statement that explains why and how we think an intervention will work). But evaluation is less common than monitoring, and that broader perspective is sometimes ignored. Overall, effective planning is critical to good M&E.
In addition to planning, the use of data is also very important; for what is the point of collecting data and transforming it into information if it is never used? M&E therefore has an implicit goal: making sure the evidence it generates triggers learning, and changes behaviors and decisions in ways that benefit the disadvantaged groups the project aims to serve.
All of this is to say that M&E is more than simply “monitoring” and “evaluation.” It is about project planning, defining results frameworks and indicators, choosing the best data collection methods and approaches, collecting, cleaning, analyzing, visualizing and sharing data, and, perhaps most important, making sure results are used. Below are some examples of how we are working to integrate Akvo tools and M&E.
It is worth noting that the concepts described above are helping us write Akvo’s approach to generating evidence – a document that outlines what we think is important in the M&E process and that can inform what we do moving forward. Perhaps we’ll make the concepts bundled into the term “M&E” explicit. Maybe we’ll develop a better name for it.
Akvo for M&E
The first time I collected data on a mobile device, it felt like I’d found the Holy Grail! Folks working on M&E spend a lot of time collecting, interpreting and reporting data, which matches Akvo’s description of what our tools do: capture, understand, and share. I think the link is clear on that front.
However, our tools aren’t meant to help users simply collect any type of data. We want the data to be high quality and relevant, so that it can inform decision-making. Training to that end goes beyond someone knowing how to select a question type. It’s about people understanding how projects are planned and the different types of indicators they can and should monitor. So, for instance, if organizations collect data only on the things they deliver (outputs), they are missing a great opportunity to document and learn about the effects those things have on the population. Remember the bees above?
Because of this need to strengthen capacities along the M&E spectrum in order to generate the kind of data we would like to see produced, the Akvo USA Foundation has integrated M&E advisory services into our strategy. Over the last two months, I have been working on our first such engagement, with Helvetas Guatemala. Together with their team, we are reviewing two of their project plans, their theory of change and results framework, and designing surveys to collect data using Akvo FLOW. They are the thematic experts; I facilitate the process, aiming to strengthen their capacity to plan, monitor and evaluate. Together, we make the best use of FLOW and generate better data.
M&E for Akvo
Over the last few months I have also tried to infuse the development of our tools with use cases from M&E. For example, monitoring indicators are sometimes expressed as percentages because that provides us with a frame of reference for analysis and can make results comparable. Take the percentage of people who receive an HIV test in a year, a common indicator for work on HIV/AIDS. Data collected with FLOW may show that 5,000 people were tested but, in itself, the number 5,000 does not provide useful programmatic information about coverage, for it may be 5,000 people out of a population of 100,000 (5% of the population tested) or out of 1 million people (0.5% of the population tested). Thus a percentage figure is needed. So, if FLOW is to be used for monitoring, can we incorporate more analytics to generate percentages? Scores? The new iteration of RSR allows users to monitor indicators: they can consolidate the same indicator from multiple projects or countries into one program-level indicator. If this HIV indicator (a percentage, remember) were an indicator within an RSR project, how could we aggregate the results from multiple projects or countries into a single percentage? (Adding percentages won’t do the trick.)
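To make the arithmetic concrete, here is a minimal sketch in Python (hypothetical code, not RSR’s actual implementation; the IndicatorResult structure and the figures are made up for illustration). The safe way to roll a percentage indicator up to the program level is to carry each project’s numerator and denominator and recompute the percentage from their sums:

```python
# A minimal sketch (hypothetical, not Akvo/RSR code) of aggregating a
# percentage indicator across projects: keep the numerator and denominator
# for each project, sum them, and only then compute the program-level rate.

from dataclasses import dataclass


@dataclass
class IndicatorResult:
    project: str
    tested: int      # numerator: people who received an HIV test
    population: int  # denominator: target population

    @property
    def percentage(self) -> float:
        return 100 * self.tested / self.population


results = [
    IndicatorResult("Project A", tested=5_000, population=100_000),    # 5.0%
    IndicatorResult("Project B", tested=5_000, population=1_000_000),  # 0.5%
]

# Wrong: averaging (or adding) the project percentages ignores the fact
# that the projects cover very different population sizes.
naive = sum(r.percentage for r in results) / len(results)

# Right: sum numerators and denominators, then compute a single percentage.
program = 100 * sum(r.tested for r in results) / sum(r.population for r in results)

print(f"Naive average of project percentages: {naive:.2f}%")   # 2.75%
print(f"Program-level coverage:               {program:.2f}%")  # 0.91%
```

The naive average treats a project covering 100,000 people the same as one covering 1 million, giving 2.75%, while the true program-level coverage is 0.91%. Any aggregation feature for percentage indicators would therefore need to store both underlying counts, not just the computed percentage.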
Another example: improving effectiveness and transparency involves giving people with different roles in the project, including community members, timely access to the information produced. How can our tools help our partners return information to the people and communities that provide data, in formats they understand, so they can make decisions to improve the project and/or hold others accountable? How do our tools help us close the M&E feedback loop? As the Humanitarian Technologies Project indicates, “The effects of not closing the feedback loop are potentially very harmful as they can lead to further silencing and demoralization of affected people.”
I have loved the past four months and I am excited about our work ahead. It has been great collaborating with different members of our team to put some of these issues on the agenda. There is plenty of room to keep improving our tools based on our partners’ work and the M&E cases we are developing. We also have yet to explore how to measure our own contributions to creating more accountability, effectiveness and learning. Perhaps we can eventually even innovate in other areas like planning and learning… how about an online platform of M&E best practices?
So, going back to the blank stares
We develop software, yes, but our goal is to generate greater accountability, effectiveness, and learning. It makes sense to work with partners to ensure their projects are well planned and that their M&E frameworks and data collection protocols are set up to generate – with our tools – high quality, relevant data that will be used to improve the situation of disadvantaged groups. That gets us, Akvo, closer to our goal.
Ethel Méndez Castillo is a monitoring and evaluation specialist, with a particular focus on the US and Latin America. Follow her @ethelnmc.