We are proud to be supporting the delivery of an online course in crisis mapping in the Caribbean this month. The training course will involve around 50 disaster management professionals from across the Caribbean and is in partnership with the Caribbean Disaster Emergency Management Agency (CDEMA), with input from the Humanitarian OpenStreetMap Team (HOT).
This is the second time the online course has been run, and we drew on our extensive deployment and training experience in its design and development.
As well as learning the principles of disaster mapping for response and resilience and OpenStreetMap data entry, the participants are practicing effective use of the GeoCRIS. This is the new regional repository for geospatial data needed for risk and hazard mapping, disaster preparedness and response operations. As MapAction was involved in setting up the GeoCRIS, we are well placed to design and deliver this training.
The month-long course also includes a disaster simulation exercise in which the students will have the opportunity to deploy their newly acquired skills in a realistic emergency scenario. Additionally, two members of MapAction’s Caribbean volunteer team who have both recently been involved in the response to the La Soufrière volcanic eruption will be running a live session to share their real-world experiences of emergency mapping in the region and answer questions.
We’re grateful to USAID’s Bureau for Humanitarian Assistance (BHA) for funding this activity as part of our ongoing programme to improve the ways in which geographical information systems (GIS), mapping and spatial analysis are used in humanitarian emergencies.
The Heidelberg Institute for Geoinformation Technology (HeiGIT) and MapAction have signed a Memorandum of Understanding (MoU) outlining their plans to collaborate in a number of areas.
Both organisations share a common vision to support humanitarian mapping by providing innovative geoinformation services for humanitarian response, mitigating risk, anticipatory action and economic development. This includes developing up-to-date disaster maps based on OpenStreetMap (OSM).
In order to fulfil these objectives, HeiGIT and MapAction will work together on various activities involving research and development, teaching, outreach and fundraising.
Examples of current and emerging services we plan to jointly develop include OSM analytics such as Humanitarian OSM Stats, which provides detailed statistics about humanitarian mapping activities in OSM, as well as OSM data-quality assessment and improvement. Of particular interest here is the ohsome quality analyst, which provides end users, such as humanitarian organisations, with information on the quality of OSM data for a specific region and use case.
Further tools may include apps for crowdsourcing, data collection, navigation, routing and logistics services and machine-learning-based methods for data processing and enrichment.
We will also share knowledge, with MapAction contributing experience aligned to HeiGIT’s teaching curriculum and HeiGIT, in return, providing teaching materials and research results to MapAction.
When MapAction triggers an emergency response, the first step is for a team of staff and volunteers to begin what is known as a “data scramble”. This is the process of gathering, organising, checking, and preparing the data required to make the first core maps that emergency response teams will need, which will also be used as the basis for subsequent situational mapping.
Traditionally, the aim was to complete this data collection as quickly as possible, gathering as much data relevant to the emergency as could be found. However, due to the time-sensitive nature of this work, the team is often unable to dissect in detail the different data source options, processes, and decisions involved as it readies the data for ingestion into maps.
What if they weren’t constrained by time during the data scramble? What if they could deconstruct the procedure and examine the data source selection, scrutinise the processing applied to every data type, and explore the ways that these steps could be automated? To answer these questions, the volunteers at MapAction, with support from the German Federal Foreign Office, have been tackling a stepping-stone project leading towards automation, dubbed the “slow data scramble”. We called it this because it is a methodical and meticulous deconstruction of a rapid data scramble as carried out in a sudden-onset emergency.
As part of our Moonshot, MapAction is looking to automate the creation of nine core maps that are needed in every response, freeing up vital time for volunteers during an emergency, and, perhaps more importantly, identifying data issues and gaps well before the onset of an emergency. Towards this end, we have just released version 1.1 of our software MapChef, which takes processed data and uses it to automatically create a map. However, even with MapChef up and running, there is still a large gap in our pipeline: how do you get the data in the first place? How do you make sure it’s in the right state to go into the map? And which data do you actually need?
The volunteer team created and led a project intended to answer precisely the above questions, with the goal of scoping out the pipeline. This would include writing the code for completing the above operations, although not yet packaging things together in a smooth way – that is saved for a future pipeline project.
Selecting the right components
The first step was to determine what data is required to produce the core maps. The volunteers identified a list of 23 ingredients that make up these maps, which we call “data requirements”. These range from administrative boundaries to roads, and from airports to hillshading (a technique for creating relief maps). To complicate matters, each data artefact had multiple possible sources. For example, administrative boundaries could come from the Common Operational Datasets (CODs, distributed by the Humanitarian Data Exchange), the Database of Global Administrative Areas (GADM), or geoBoundaries.
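A catalogue of requirements like this lends itself to a small machine-readable structure. The following is a minimal Python sketch; the requirement names, source priorities, and formats are illustrative examples, not MapAction’s actual configuration:

```python
# Illustrative catalogue of core-map data requirements.
# Requirement names, source priorities and formats are examples only,
# not MapAction's actual configuration.

DATA_REQUIREMENTS = {
    "admin_boundaries": {
        # Candidate sources listed in order of preference
        "sources": ["COD (via HDX)", "GADM", "geoBoundaries"],
        "format": "shapefile",
    },
    "roads": {
        "sources": ["OpenStreetMap"],
        "format": "shapefile",
    },
    "hillshade": {
        "sources": ["SRTM-derived DEM"],
        "format": "geotiff",
    },
}

def preferred_source(requirement: str) -> str:
    """Return the first-choice source for a given data requirement."""
    return DATA_REQUIREMENTS[requirement]["sources"][0]
```

Encoding the alternatives explicitly makes it possible to fall back to a secondary source when the preferred one is unavailable for a given country.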
“The scale and extent of data available for just a single country administrative area alone is staggering.”
James Wharfe, MapAction volunteer
Next, the team needed to address how to obtain the data and ready it for further processing. Normally, when volunteers make maps by hand, they go to the website associated with each artefact, manually download it, and tweak it by hand until it is ready to be used in a map. However, with the pipeline this all needs to be automated.
To approach this considerable undertaking, the team divided up the work into small, digestible tasks, meeting fortnightly to discuss progress, answer each other’s questions, and assign new tasks. This work continued diligently for seven months, at the end of which they had a functional and documented set of code snippets capable of automatically downloading and transforming the data required for all artefacts.
There were numerous challenges along the way that the team needed to overcome. Understanding the differences between the various data sources proved a significant hurdle. “The scale and extent of data available for just a single country administrative area alone is staggering,” noted volunteer James Wharfe. (Indeed, this data landscape is so complex that it merits its own post – stay tuned for a blog about administrative boundaries as part of our upcoming “Challenges of…” series.)
One particular data source that seemed to crop up everywhere was OpenStreetMap (OSM). Almost all of the data requirements in the slow data scramble are available from OSM, making it a key data source. However, given the sheer detail and size of the OSM database – 1.37 terabytes as of 1 February 2021 – there are several difficulties involved in working with the data.
For the download step, the team decided to use the Overpass API and wrote a Python method to abstract its complex query language down to simple YAML files of OSM tag lists. Next, the downloaded data needed to be converted from the OSM-specific PBF format to a shapefile, which is the format MapChef expects. Several solutions for this exist: to name a few, Imposm, PyDriosm, Osmosis, OSM Export Tool, and Esy OSM PBF. For this project we decided to use GDAL; however, we certainly plan to explore the other options, and hope eventually to host our own planet file.
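The tag-list abstraction might look something like the sketch below. The function name, area ID, and tag lists are hypothetical stand-ins for what would be loaded from the YAML files, not MapAction’s actual code:

```python
# Sketch: build an Overpass QL query from a simple mapping of OSM keys
# to values, as might be loaded from a YAML tag-list file.

def build_overpass_query(area_id: int, tags: dict) -> str:
    """Assemble an Overpass QL query for ways matching the given tags."""
    clauses = []
    for key, values in tags.items():
        regex = "|".join(values)
        # Match ways carrying any of the requested values for this key
        clauses.append(f'way["{key}"~"^({regex})$"](area.a);')
    body = "\n  ".join(clauses)
    return (
        "[out:xml][timeout:120];\n"
        f"area({area_id})->.a;\n"
        f"(\n  {body}\n);\n"
        "out body; >; out skel qt;"
    )

query = build_overpass_query(
    3600192830,  # hypothetical Overpass area ID for a country
    {"highway": ["primary", "secondary"], "aeroway": ["aerodrome"]},
)
```

The downloaded extract can then be handed to GDAL for conversion, for example `ogr2ogr -f "ESRI Shapefile" roads.shp extract.osm.pbf lines`, which uses GDAL’s OSM driver to pull out the line features.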
Even though the goal of the slow data scramble was not to produce production-quality code, the team still used Git to host their version-controlled code. According to Steve Penson, the volunteer leading the project, “The collaborative and explorative nature of the project meant Git was incredibly useful. With each volunteer tackling significantly different challenges, establishing a strong code control setup made our weekly reviews far easier.”
The team also used the opportunity to extend their Python skills, with a particular focus on GeoPandas, which enables some of the more intricate data transformations that are normally performed by mainstream desktop GIS tools.
Additionally, the group used this work to explore the concept of DAGs, or directed acyclic graphs. This term refers to the building blocks of any pipeline: a recipe, or series of steps, that you apply to your data. There are scores of packages available to assist with pipeline development, but to start, the team decided to use a simple workflow management system called Snakemake. Snakemake connects the expected input and output files across multiple pipeline stages using Makefile-style rules. Although, in the end, the team decided it was not the best solution for scaling up to the real pipeline (which is now being developed with Airflow), they agreed that using Snakemake was a great stepping stone to becoming familiar with this key concept.
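A Snakemake workflow of the kind the team experimented with can be sketched as follows. All file, script, and rule names here are illustrative; Snakemake derives the DAG automatically from the declared inputs and outputs:

```
# Illustrative Snakefile: each rule declares its inputs and outputs,
# and Snakemake derives the DAG connecting the stages.

rule all:
    input:
        "data/processed/roads.shp"

# Download raw OSM data via the Overpass API (script name hypothetical)
rule download_osm:
    output:
        "data/raw/roads.osm.pbf"
    shell:
        "python download_osm.py --tags config/roads.yml --out {output}"

# Convert the OSM PBF extract to a shapefile using GDAL
rule convert_to_shapefile:
    input:
        "data/raw/roads.osm.pbf"
    output:
        "data/processed/roads.shp"
    shell:
        "ogr2ogr -f 'ESRI Shapefile' {output} {input} lines"
```

Running `snakemake` with the target `data/processed/roads.shp` would then execute only the stages whose outputs are missing or out of date.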
Finally, before COVID-19 hit, MapAction’s dedicated volunteers were accustomed to meeting in person once a month – a commitment that led to many enjoyable shared moments and close friendships. This positive and much-loved aspect of being a volunteer at MapAction has unfortunately been hindered by the pandemic. Although still conducted fully remotely, the slow data scramble offered the chance to regularly meet, share expertise, motivate and encourage each other, and work together. Volunteer Dominic Greenslade said it well: “MapAction volunteers are amazing people, and the ability to spend so much time getting to further these friendships was a great bonus”.
MapAction has been chosen as one of 202 organisations taking part in Google’s 2021 Summer of Code, a global programme that aims to bring student developers into open source software development. As part of the scheme, which has now entered into its 17th year, students can apply for placement projects from 202 open source entities, with their time paid for by Google. MapAction is one of 31 organisations taking part for the first time.
Since launching, over 16,000 students across 111 countries have taken part by working with an open source organisation on a 10-week programming project during their summer break from studies. Google Summer of Code is open to students aged 18 and over who are enrolled in a post-secondary academic programme in most countries. As MapAction is one of the few organisations taking part from the humanitarian charity sector, it’s a great opportunity to highlight the importance of technology advances to our work.
As part of our Moonshot initiative, two students will be helping us with our goal of automating the production of core maps needed in any humanitarian crisis, for 20 priority countries. Being able to automate these maps means essential contextual and reference information about, among other things, the local environment, population and infrastructure, is immediately available when needed in the best possible quality. The students will be working with MapChef, our Python-based map automation tool, and our MVP pipeline framework for automated data acquisition and processing. As our capability grows, we intend to use these systems to identify data gaps at regional levels.
Take a look at our project ideas for the Google Summer of Code. Applications officially open on 29 March and we anticipate a lot of interest.
In March 2020, MapAction and the UN OCHA (Office for the Coordination of Humanitarian Affairs) Centre for Humanitarian Data embarked on a new level of collaboration by sharing a Data Scientist’s time between the two organisations. Both teams had a lot to offer and learn from each other, with MapAction bringing its geospatial expertise to the Centre’s Predictive Analytics (PA) team. Predictive analytics is a form of data science that uses current and historical facts to predict future events. For MapAction, this collaboration also constituted an important aspect of our Moonshot, which sees us transitioning from being a passive data consumer to an organisation that actively contributes to humanitarian datasets.
Looking back on our work over the past year, we can see how this collaboration has benefitted both organisations in many ways, with numerous positive repercussions more widely. Ultimately, the fruits of our joint working are examples of how data science can help to reduce suffering and save lives in humanitarian initiatives.
“Your input leap-frogged us forward. It is amazing to me how quickly we were able to do this together. A round of applause for your work and its contribution to unlocking critically needed aid for Ethiopians.”
Josée Poirier, Predictive Analytics Technical Specialist, Centre for Humanitarian Data
In the latter part of the year, a MapAction team of volunteers helped the Centre’s PA team develop analysis for a drought-related anticipatory action framework which was designed to trigger mitigation activities ahead of a predicted drought crisis. The PA team aimed to better understand the reliability of various indicators used to predict potential food shortages caused by drought in Somalia and Ethiopia. These indicators were then used to trigger an early release of funds from the UN’s Central Emergency Response Fund (CERF). The MapAction team reviewed past literature, evaluated available satellite images, and created a prototype drought model in Google Earth Engine (a platform for visualising and analysing satellite imagery of Earth). These inputs helped the PA team to flag an upcoming crisis in Ethiopia and trigger an activation for a humanitarian response. In the words of Josée Poirier, Predictive Analytics Technical Specialist from the PA team: “Your input leap-frogged us forward. It is amazing to me how quickly we were able to do this together. A round of applause for your work and its contribution to unlocking critically needed aid for Ethiopians.”
The MapAction and PA teams also collaborated to implement and validate an approach for mapping flooding from satellite imagery. MapAction’s Data Scientist has been working with the PA team to help evaluate the impact of recent anticipatory action in Bangladesh which took place in July 2020 and was the fastest-ever allocation of CERF funds. To better understand how this aid was helpful to those affected, the PA team needs to know exactly when, where, and for how long flooding occurred. Contributing to this work also has direct benefits for MapAction’s own work, enabling us to add a new data processing method to our disaster-response toolbox. We then had the opportunity to test this methodology in our response to the devastating impacts of Hurricanes Eta and Iota in Central America.
Both organisations have made commitments to assist in the global pandemic response. The Centre’s PA team and MapAction’s Data Scientist have, in partnership with the Johns Hopkins University Applied Physics Laboratory (APL), developed a model to forecast the number of cases, hospitalisations, and deaths due to COVID-19 for six countries, tailored to each country’s specific humanitarian needs. Named OCHA-Bucky, the model offers sub-national projections and takes into account the effects of non-pharmaceutical interventions. Presently, MapAction is participating in a pilot project to aid vaccine rollout in vulnerable countries by surveying the current data landscape and identifying gaps, in order to address the logistical challenges inherent in such tasks. Along a similar line of work, the Centre’s PA team and APL are planning to add vaccination strategies to the OCHA-Bucky model.
There is substantial overlap between the broad technical goals of the two organisations. The Centre’s Humanitarian Data Exchange (HDX) contains over 18,000 datasets and it has created several automatic pipelines (software that carries out a series of data-processing steps) to systematically ingest data from its partners into its database. The Centre’s technical expertise has so far been a key input into the planning and development of a similar (albeit smaller scale) pipeline at MapAction, which is being created to automate the generation of core maps as part of the Moonshot initiative. This work will ensure that base maps essential for coordinating any type of humanitarian response are immediately available whenever they are needed.
The two organisations share similar data access platforms and are actively engaged in ongoing discussions regarding different ways to construct pipeline software. Finally, both HDX and MapAction ultimately seek to identify and rectify gaps in the humanitarian data landscape in order to ensure that those coordinating the preparations for and responses to different types of emergencies have the reliable, timely information they need.
Looking ahead in 2021
MapAction and the Centre for Humanitarian Data are continuing to plan ways to collaborate throughout the rest of the year and beyond.
In addition to sharing expertise in advanced analytics, we are working to make data-driven methods accessible to wider audiences in the humanitarian sector in order to improve the effectiveness of aid programmes. MapAction and the Centre’s Data Literacy team have identified an opportunity to come together to develop GIS training material. This work aims to help non-technical humanitarians make better use of geospatial data to understand the needs of affected communities and coordinate aid.
Both teams are also collaborating to ensure that our data science workflows and models are published openly and can be used by others in the field. Inspired by initiatives such as The Turing Way, we are formalising and adopting best practices to write high quality code, document methodologies, and reproduce results.
At the end of the first year of our collaboration, it is gratifying to reflect on how much we have been able to achieve together while learning from each other and expanding our collective knowledge. We’re grateful to the German Federal Foreign Office for making this work possible by funding our Data Scientist role. We’re looking forward to continuing to work together to push forward the boundaries of humanitarian data science.
What is MapAction’s ‘humanitarian data landscape’? At MapAction, we’re working to put data at the centre of how we provide products and services to the humanitarian sector. MapAction’s data scientist, Monica Turner, recently posted about the work she does in this new role. However, ‘data’ is a big (and sometimes loaded) term. So what does it mean to MapAction? We asked Hannah Ker, MapAction’s Data Scientist while Monica is on maternity leave, to explain.
During a humanitarian crisis, it is vitally important for responders to have information such as which areas are most affected, where vulnerable populations are, and where relevant infrastructure and services (such as healthcare facilities) are located. MapAction provides information products (such as maps) to our partners to help them address these information needs. Unsurprisingly, the vast majority of the data that we work with at MapAction is geospatial. We aim to use geospatial techniques, such as cartography, to make complex data rapidly accessible to those responding to humanitarian crises.
The ‘Layers of data’ page (see diagram below) from our Example Product Catalogue provides a useful framework for thinking about how many different datasets are processed and combined into a meaningful final product.
Firstly, we can think of the data that is input to our basemap or initial reference map of a given area. This data often reflects features such as administrative boundaries, land elevation, settlements, and transportation infrastructure. Secondly, we have baseline data that provides demographic information about the area of interest, such as population numbers and numbers of schools.
Our last data layer includes situational information that is relevant to the humanitarian context at hand. The kinds of data relevant for this layer can vary significantly depending on the circumstances. This data is also likely to be the most dynamic and temporally sensitive. For example, it may be used to show change over time as a crisis evolves.
All of this data can come from a variety of sources. The Humanitarian Data Exchange (HDX), developed and maintained by the UN OCHA Centre for Humanitarian Data, is a repository that holds over 17,000 datasets from more than 1,300 different sources. These datasets come from what we might think of as ‘authoritative’ sources of information, such as the World Bank or the World Food Programme.
In particular, MapAction frequently uses the Common Operational Datasets of Administrative Boundaries (COD-AB) that are published and maintained by the UN Office for the Coordination of Humanitarian Affairs (OCHA). It can be challenging to access complete and up-to-date administrative boundary data, so the CODs attempt to provide standardised, high quality data that can be used to support humanitarian operations.
OpenStreetMap (OSM) also provides a valuable source of geospatial data. This ‘Wikipedia of maps’ is an entirely crowdsourced map of the world. In theory, anyone, anywhere in the world (with an internet connection) can contribute to OSM. At MapAction, we use OSM as a source of data for features such as settlements and transportation infrastructure. MapAction is a partner in the Missing Maps project, which seeks to crowdsource the filling of gaps in available maps.
So why, one might ask, can’t we just use maps that already exist, like Google Maps? Why all these complex data layers? Why spend so much time finding data when it’s already all there?
Platforms such as Google Maps, Waze, and Apple Maps are commonly used as day-to-day navigation tools for people in many parts of the world. However, such existing tools do not provide the flexibility that is often required when managing and presenting geospatial data in humanitarian scenarios. As these tools are privately-developed, individuals and organisations do not always have the ability to manipulate or style the underlying data to suit their needs. These platforms were not created specifically for humanitarian use-cases, and so may not always include the information that meets the operational requirements of humanitarian contexts, such as locations of damaged buildings or the extent of a flood.
OSM’s Humanitarian map style, for example, shows some of the unique data styling that may be required in humanitarian contexts. Moreover, there are many parts of the world with human settlements that are not present (or poorly represented) on existing maps, as is demonstrated by efforts from organisations such as the Humanitarian OpenStreetMap Team and the Missing Maps initiative. These challenges mean that there is no existing ‘one size fits all’ mapping platform that is capable of providing and presenting all of the information that is needed in humanitarian contexts.
Finding high-quality geospatial data is an ongoing challenge for us at MapAction. Geospatial data quality is a multifaceted concept, with dimensions such as timeliness, positional accuracy, logical consistency, and completeness. The image below, for example, shows a geometry problem that we often face with administrative boundary data. Notice the gap in the border between Chad and the Central African Republic. A lack of standardisation in this data between different countries and organisations, or out-of-date data, can result in such misalignment. Due to the political sensitivity associated with boundary data, it is important to ensure that the data we use is as accurate as possible.
Our ongoing work around the Moonshot project seeks to develop tools that can help us to automatically detect and address quality issues such as these. Keep an eye out for future blog posts where we will address some of these technical challenges in greater detail.
At the end of the day, we’re working to make complex situations better understood. Humanitarian crises are incredibly complex, and accordingly, can be associated with complex datasets and information. By selecting high quality datasets and visualising them in clear and accessible ways, we intend for our humanitarian partners to be able to make informed decisions and deliver effective aid to those in need.
MapAction’s Data Scientist is funded by the German Federal Foreign Office (GFFO), but the views and opinions above do not necessarily represent those of the GFFO.
MapAction has been collaborating for a number of years with French NGO and fellow humanitarian information management specialists CartONG.
In addition to our operational activities, we thought it would be worthwhile to pool our collective knowledge to create an informative article. The ten-minute read aims to give some helpful tips for people creating maps intended to assist humanitarian responses to the coronavirus and other pandemics.
Between us, we have a lot of experience of using geospatial analysis and visualisations to inform decision-making in this and previous epidemics, such as Ebola, as well as the current pandemic. We wanted to share this knowledge more widely and felt that, by working together, we could create something really useful and reach more people. Although it was written with pandemics in mind, many of the points apply to all kinds of map making.
You can read the article on the CartONG blog below.
This project was co-funded by the French Development Agency (AFD) and the H2H Network’s H2H Fund, the latter supported by UK aid from the UK government.
Although data science is still a relatively new field, its potential for the humanitarian sector is vast and ever-changing. We caught up with one of MapAction’s data scientists, Monica Turner, to discover how data science is evolving, the impact of COVID-19 on her work, and how predictive modelling could see disaster funding being released before a disaster has occurred.
Interview by Karolina Throssell, MapAction Communications Volunteer
How did you get into data science?
I have a background in astrophysics but wanted to transition into data science, so I started volunteering with 510 Global, which is part of the Netherlands Red Cross. This was my first experience in the humanitarian sector, and I was immediately hooked. After working briefly as a data scientist at a technology company, I began working at MapAction in March 2020. As part of my work, I am seconded to the Centre for Humanitarian Data in The Hague, which is managed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA).
What is the role of data science at MapAction?
Even though one of MapAction’s primary products is maps, these are created by combining many different data sets. So, while the explicit presence of a data scientist at the organisation is new, MapAction has fundamentally always been doing data science on some level. With this new role, the hope is both to formalise current data science practices and to expand our analytical capability, ultimately shifting us from being a data consumer to playing an active role in the development and improvement of humanitarian data sets.
As a data scientist, you often have to wear many hats – from data cleaning to model development to visualisation. With the Moonshot project, we are looking to automate the creation of seven to nine key maps for 20 countries. One of my first tasks is to design and build a pipeline that downloads, transforms, and checks the quality of all the different data sets that make up these key maps. The details of this pipeline will be the subject of a future blog post.
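The three stages of that pipeline can be sketched conceptually as follows. Every function and check in this toy example is illustrative, not MapAction’s actual pipeline code:

```python
# Toy sketch of the three pipeline stages: download, transform,
# quality-check. All names and checks here are illustrative.

def download(source: str) -> dict:
    """Stand-in for fetching a dataset; a real stage would call an API."""
    return {"source": source,
            "features": [{"name": "Admin1", "valid": True},
                         {"name": "Broken", "valid": False}]}

def transform(raw: dict) -> dict:
    """Normalise the raw data into the structure the maps expect."""
    return {"source": raw["source"],
            "features": [f for f in raw["features"] if f["valid"]]}

def quality_check(data: dict) -> bool:
    """Minimal completeness check: at least one usable feature remains."""
    return len(data["features"]) > 0

def run_pipeline(source: str) -> dict:
    data = transform(download(source))
    if not quality_check(data):
        raise ValueError(f"quality check failed for {source}")
    return data

result = run_pipeline("admin-boundaries")
```

In the real pipeline each stage is far richer, but the shape is the same: acquisition, transformation, then validation before anything reaches a map.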
How has COVID-19 impacted on your work?
One of MapAction’s strengths is the field work that we are able to do during an emergency as well as the remote support we provide. However, as COVID-19 has limited the ability to travel, the paradigm has shifted and we need to rethink how we respond to emergencies overall. In particular, we are working to expand the types of products that we offer to our partners, as the demand increases for more remote-oriented products such as web-based dashboards.
At the Centre for Humanitarian Data, in collaboration with the Johns Hopkins Applied Physics Laboratory, we’ve been developing a model of the spread of COVID-19 to help low- and middle-income countries plan their responses.
One of the main challenges of modelling COVID-19 is the novelty of the disease. Since there is no historical data, model validation becomes much more challenging. Additionally, the number of cases and deaths is a crucial input to the model. In higher-income countries more testing is done, so the data we need is available; however, the availability and quality of this data in low- and middle-income countries poses a further hurdle. Nevertheless, even with these caveats it is still very valuable to provide low- and middle-income countries with a tailored scenario-building tool for developing their COVID-19 response.
Where is data science heading?
Predictive analytics will play a much larger role in the future of data science. The UN is currently working on a huge project to develop and validate predictive models that will enable it to release funding from the Central Emergency Response Fund (CERF), helping communities prepare and protect themselves before disasters occur. After a successful pilot project in Bangladesh, we plan to extend our model validation to other types of crisis, such as cholera outbreaks and food insecurity.
At MapAction, the Moonshot will lead a shift towards preparedness and enable us to develop methods to assess the completeness and quality of the data going into our maps. Our hope is that with this emphasis on data analysis, we will be able to provide meaningful contributions to a wide array of humanitarian data sets. Additionally, we are hoping to build an analytics team, and will be recruiting data science volunteers in early 2021, so check our website and sign up to our newsletter to find out how you can apply. And if you can contribute in other ways to our data science work, please contact us!
Since March this year, a MapAction data scientist has been based at the Centre for Humanitarian Data in The Hague, supporting its workstream on predictive analytics. The aim of this important work is to forecast humanitarian emergencies and needs in order to trigger responses before a disaster occurs.
One of the projects the Centre’s predictive analytics team is working on, in partnership with the Johns Hopkins University Applied Physics Laboratory and individual country offices of the UN’s Office for the Coordination of Humanitarian Affairs (OCHA), is the development of COVID-19 modelling tailored for each country’s specific context. This seeks to predict the scale, severity and duration of the outbreak within each country, including its likely effects on particularly vulnerable groups, such as people at risk of hunger or those using solid fuel indoors for cooking.
The project is also modelling the effects of non-pharmaceutical interventions (NPIs) such as curfews, travel bans and face masks, according to what is locally viable.
The inclusion of country-specific factors, looking at projections for specific vulnerable groups as well as the general population at a sub national level, can make this work particularly helpful for governments and humanitarian organisations to inform their COVID planning.
The initial model was developed for Afghanistan and is now being extended to other priority countries including the Democratic Republic of the Congo, South Sudan and Sudan.
MapAction has formed a new partnership with the German Federal Foreign Office (GFFO) Humanitarian Assistance to help improve the use of technology and data in humanitarian decision making.
As part of the broad-ranging programme, MapAction is working on
greatly reducing the time and effort required to create maps and data products needed
in many emergencies, by automating repeat processes. It is also extending its
capacity to have specialist personnel in emergency situations for longer
periods to support information management and decision-making processes, and
placing a data scientist in the Centre for Humanitarian Data in The Hague to
facilitate knowledge sharing.
MapAction Chief Executive Liz Hughes said, “This is an exciting programme which will help to keep us at the vanguard of humanitarian response missions, but also, vitally, to overhaul our technical offer. This will enable us to continue to help ensure the best possible outcomes for people affected by disasters and humanitarian emergencies. We are very pleased to be working with GFFO and looking forward very much to getting stuck in to this important work together.”
This week, a MapAction volunteer has been participating in discussions and strengthening relationships with our partners at the International Search and Rescue Advisory Group (INSARAG) Information Management Working Group and Team Leaders meetings in Santiago, Chile. These conversations enable us to continuously improve how we visualise data collected by INSARAG teams.