Would an elderly grandma from a remote village in Sierra Leone know what the Sustainable Development Goals (SDGs) are? She probably wouldn’t. And should she? Probably yes, considering that data collection to monitor progress on the SDGs should start at the community level. This rather thought-provoking statement kicked off the discussion around community involvement and SDG data at the International Open Data Conference (IODC) 2016.

Open data and SDGs
Collecting data from the ground level up requires a smart tool that fits the local context. In remote communities, poor or no internet connection is the reality. The fact that about 17% of the world’s population is illiterate makes data collection an even more challenging endeavour. Contextual knowledge like that should guide our decisions about how we collect data and feed it back to the community. Sharing all collected data plays a critical empowerment role in closing the communication loop and, ultimately, keeping the grandma from Sierra Leone updated about progress on the SDGs.

Presentation by Aditya Agrawal from the Global Partnership for Sustainable Development Data. Source: @OpenDataWatch via Twitter.

Now, let’s go one level higher.


I don’t envy the local governments and their respective Ministries of Statistics, which need to make sure SDG data is delivered in a timely fashion and to reasonable quality standards. It’s quite a demanding job to ensure the continuous collection and maintenance of birth data, not to mention the complexities of SDG data.

Eric Swanson from Open Data Watch, an NGO focused on development data management and statistical capacity building, gave a comprehensive overview of the needs and challenges that National Statistics Offices often face. The lack of automated processes for dealing with (open) data is a serious technical constraint. Data illiteracy and lack of political will add to the challenge as well.

One of the slides presented by Swanson at the conference (see below) briefly summarises the types of data one needs to collect to make sure we’re on track with the ‘seventeen aspirational global goals’. Commitment to capacity building on data collection and sharing needs to be in place to give us a better chance of reaching these goals.

Eric Swanson, co-founder and managing director of Open Data Watch, and Haishan Fu, director of the World Bank’s Development Data Group on stage at IODC 2016. Source: @OpenDataWatch via Twitter.

With the United Nations orchestrating the SDG process, all associated data should stay with local governments. After all, ownership brings about responsibility. In this case, responsibility refers to updating and maintaining the precious datasets that take a lot of effort to put together. Yes, there is life after the SDGs – continuous data collection and monitoring should eventually grow into a universally adopted habit.

Open data and humanitarian response
Imagine for a second working for a local government in an earthquake-prone country. When an earthquake hits, you need easy access to all the vital information in order to assess the damage and coordinate humanitarian efforts.

Which areas of the country were hit particularly hard? Which medical facilities were spared and can provide medical care to earthquake victims? Which international organisations are providing assistance, and where? How much funding has been pledged to disaster relief?

Unfortunately, quite often all this information sits in different places. Moreover, it’s unstructured and shared in formats that make it frustratingly difficult to aggregate and analyse.

The good news is that there are a number of ground-up initiatives tackling some of these issues. Take, for instance, the Open Nepal Earthquake Response platform. The team behind this initiative undertook a significant effort to collect the data scattered across different sources and aggregate it in a human- and machine-readable format. The platform offers a wealth of information about the amounts of funds that organisations committed, pledged and disbursed to the earthquake-hit country. Interestingly, over $3.5 billion has been pledged to the cause, while no more than $600m has been disbursed to date. Also, the sheer number of organisations pledging their support – almost a thousand – is quite impressive.
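To give a feel for what that kind of aggregation involves, here’s a minimal sketch of totalling the pledged, committed and disbursed amounts. The file nepal_funding.csv and its column names are hypothetical stand-ins for whatever export such a platform provides, not the platform’s actual data model.

```python
# Minimal sketch: total up funding amounts from a hypothetical CSV export
# with one row per organisation and columns "organisation", "pledged",
# "committed" and "disbursed" (amounts in USD).
import csv
from collections import defaultdict

def totals_by_status(csv_path):
    """Return the total amount per funding status across all organisations."""
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for status in ("pledged", "committed", "disbursed"):
                totals[status] += float(row.get(status) or 0)
    return dict(totals)

if __name__ == "__main__":
    totals = totals_by_status("nepal_funding.csv")  # hypothetical export
    for status, amount in totals.items():
        print(f"{status:>10}: ${amount:,.0f}")
```

Nothing in this sketch is clever; the hard part, as the Open Nepal team found, is getting the scattered source data into a consistent, machine-readable shape in the first place.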

Visualisation
‘Leveraging data visualisation and partnerships for environmental action’ was one of those hot sessions that attracted a lot of interest at the conference (and generated a stream of tweets). It’s perhaps no surprise that maps featured prominently in the discussion – geotagging otherwise ‘homeless’ data can be powerful indeed. Jessica Webb, a Civil Society Specialist at Global Forest Watch, presented a platform that provides near real-time information on the state of forests globally.

The Global Forest Watch platform looks striking, to say the least.


Image credit: globalforestwatch.org

It’s not only the timeline that comes in handy, but also the many layers to explore, such as ‘tree cover loss’, ‘managed forests’ or ‘biodiversity hotspots’. Playing around with different parameter combinations can yield some interesting analysis. The platform offers an option to download all the datasets in multiple formats, and even to contribute your own.

Post conference thoughts and takeaways
Open data seems to be a bit of a roller coaster at the moment – working with it involves a lot of ups and downs. In high spirits after the conference, I came across this blog post, which was a good reality check about the applicability of open data. It’s about Hurricane Matthew, which ravaged Haiti in early October and, sadly, left a lot of devastation behind.

Can one use the wealth of IATI data to find out which humanitarian efforts are underway in the region? The question is simple, but the facts are somewhat more nuanced. The quality of IATI datasets still seems far from perfect, and analysing them requires a lot of specialised knowledge. The answer to the question about Hurricane Matthew is that … there is no straightforward answer. In fact, making sense of open data is far more difficult than making it open.
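For readers curious what that analysis looks like in practice, here’s a minimal sketch of filtering an IATI activity file for activities whose recipient country is Haiti. It assumes an IATI 2.x XML file has already been downloaded (activities.xml is a placeholder name); real files vary in version, structure and completeness, which is exactly the specialised-knowledge problem described above.

```python
# Minimal sketch: list activity titles targeting a given recipient country
# from a local IATI 2.x activity file. Element and attribute names follow the
# published IATI activity standard; real-world files vary in completeness.
import xml.etree.ElementTree as ET

def activities_for_country(xml_path, country_code="HT"):
    """Yield activity titles whose recipient-country matches the ISO code."""
    root = ET.parse(xml_path).getroot()
    for activity in root.findall("iati-activity"):
        codes = [c.get("code") for c in activity.findall("recipient-country")]
        if country_code in codes:
            title = activity.find("title/narrative")
            yield title.text.strip() if title is not None and title.text else "(untitled)"

if __name__ == "__main__":
    for title in activities_for_country("activities.xml"):  # placeholder filename
        print(title)
```

Even this toy example glosses over the real difficulties: inconsistent country coding, missing or duplicated activities, and older 1.x files with a different structure.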

Data providers seem to lack a common language for dealing with open data. National governments and international organisations respond to the pressure for increased transparency and accountability by opening up swathes of data without thoroughly assessing its impact and applicability. Moreover, data coverage and granularity are not always sufficient for data analysis to be statistically significant. Datasets that are granular enough are not always interoperable, or, in other words, compatible with other datasets. Continuous public scrutiny might push data providers to consider the quality and impact of their open data more closely.

Technological advances shape the choices we make about our digital products at Akvo. Open data is the zeitgeist of our times, but how do we make sure that we don’t overlook the important ‘do no harm’ principle?


“Everything is better when it’s open,” from the European Commission’s stand. Photo by Nadia Gorchakova.


Technology ethics is definitely an elephant in the room here. Can we open up data and rest assured that it won’t be misused? Surely, no one would want the data on water points, for example, to serve the interests of dishonest organisations or military groups. Moreover, with the growing digital divide between countries, but also across gender and age groups, we need to rethink how to make open data more inclusive.

The ethics of technology seem to pose more questions than answers for now. We at Akvo take these questions very seriously and encourage open dialogue on how to leverage the benefits of open data in international development.

To finish on a somewhat upbeat note, have a look at the beer opener I picked up from the European Commission’s stand: “Everything is better when it’s open.”

Nadia Gorchakova is a product manager at Akvo. She is based in Amsterdam. Follow her at @NadiaGorchakova.