Education Infrastructure#

In this notebook, we will perform a damage and risk assessment for education infrastructure. The assessment combines hazard data (e.g., flood depths) with vulnerability curves to estimate the potential damage to each asset; a minimal sketch of this calculation is shown after the overview of steps below.

We will follow the steps outlined below to conduct the assessment:

  1. Loading the necessary packages:
    We will import the Python libraries required for data handling, analysis, and visualization.

  2. Loading the data:
    The infrastructure data (e.g., secondary schools) and hazard data (e.g., flood depths) will be loaded into the notebook.

  3. Preparing the data:
    The infrastructure and hazard data will be processed and data gaps can be filled, if required.

  4. Performing the damage assessment:
    We will apply vulnerability curves to estimate the potential damage based on the intensity of the hazard.

  5. Visualizing the results:
    Finally, we will visualize the estimated damage using graphs and maps to illustrate the impact on education infrastructure.
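
The core of such an assessment is conceptually simple: for every asset we look up the damage fraction that corresponds to the local hazard intensity on a vulnerability curve, and multiply it by the asset's exposed area and its maximum damage per square meter. The sketch below illustrates this idea with made-up numbers (the depth, curve, and cost values are purely illustrative); the actual calculation is performed by the DamageScanner package later in this notebook.

import numpy as np

# Illustrative depth-damage curve: flood depth (m) vs. fraction of the maximum damage
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
damage_fraction = np.array([0.0, 0.2, 0.4, 0.7, 1.0])

flood_depth = 1.5          # flood depth at the asset location (m), illustrative
asset_area = 1200          # footprint of the asset (m2), illustrative
max_damage_per_m2 = 1000   # maximum damage (USD per m2), illustrative

# Interpolate the damage fraction for the local flood depth and scale it
fraction = np.interp(flood_depth, depths, damage_fraction)     # 0.55
estimated_damage = fraction * max_damage_per_m2 * asset_area   # 660,000 USD
print(f"Estimated damage: {estimated_damage:,.0f} USD")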

1. Loading the Necessary Packages#

To perform the assessment, we are going to make use of several Python packages.

In case you run this in Google Colab, you will have to install the packages below (remove the hashtag in front of them).

#!pip install damagescanner==0.9b14
#!pip install contextily
#!pip install exactextract
#!pip install lonboard

In this step, we will import all the required Python libraries for data manipulation, spatial analysis, and visualization.

import warnings
import xarray as xr
import numpy as np
import pandas as pd
import geopandas as gpd
import seaborn as sns
import shapely

import matplotlib.pyplot as plt
import contextily as cx

import damagescanner.download as download
from damagescanner.core import DamageScanner
from damagescanner.osm import read_osm_data
from damagescanner.base_values import DICT_CIS_VULNERABILITY_FLOOD

from lonboard import viz

warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=RuntimeWarning) # exactextract raises a RuntimeWarning that can safely be ignored

Specify the country of interest#

Before we continue, we should specify the country for which we want to assess the damage. We use the ISO3 code for the country to download the OpenStreetMap data.

country_full_name = 'Jamaica'
country_iso3 = 'JAM'

2. Loading the Data#

In this step, we will load four key datasets:

  1. Infrastructure data:
This dataset contains information on the location and type of education infrastructure (e.g., schools or universities). Each asset may have attributes such as type, footprint size, and replacement cost.

  2. Hazard data:
    This dataset includes information on the hazard affecting the infrastructure (e.g., flood depth at various locations).

  3. Vulnerability curves:
    Vulnerability curves define how the infrastructure’s damage increases with the intensity of the hazard. For example, flood depth-damage curves specify how much damage occurs for different flood depths.

  4. Maximum damage values:
    This dataset contains the maximum possible damage (in monetary terms) that can be incurred by individual infrastructure assets. This is crucial for calculating total damage based on the vulnerability curves.

Infrastructure Data#

We will perform this example analysis for Jamaica. To start the analysis, we first download the OpenStreetMap data from Geofabrik.

infrastructure_path = download.get_country_geofabrik(country_iso3)

Now we load the data and read only the education-related features.

%%time
features = read_osm_data(infrastructure_path,asset_type='education')
CPU times: total: 25 s
Wall time: 32.7 s
sub_types = features.object_type.unique()
sub_types
array(['school', 'kindergarten', 'university', 'college', 'library'],
      dtype=object)

And we can explore our data interactively:

viz(features)

Hazard Data#

For this example, we make use of the flood data provided by CDRI.

We use a 1/100 (100-year return period) flood map to showcase the approach.

hazard_map = xr.open_dataset("https://hazards-data.unepgrid.ch/global_pc_h100glob.tif", engine="rasterio")
hazard_map
<xarray.Dataset> Size: 747GB
Dimensions:      (band: 1, x: 432000, y: 216000)
Coordinates:
  * band         (band) int32 4B 1
  * x            (x) float64 3MB -180.0 -180.0 -180.0 ... 180.0 180.0 180.0
  * y            (y) float64 2MB 90.0 90.0 90.0 90.0 ... -90.0 -90.0 -90.0 -90.0
    spatial_ref  int32 4B ...
Data variables:
    band_data    (band, y, x) float64 746GB ...

Maximum damages#

One of the most difficult parts of the assessment is to define the maximum reconstruction cost of particular assets. Here we provide a baseline set of values, but these should be updated through national consultations.

Locations of education facilities are (somewhat arbitrarily) geotagged as either points or polygons. This matters quite a lot for the maximum damages: for polygons, we would use a damage value per square meter, whereas for points, we would have to estimate the damage to the entire asset at once. Here we take the approach of converting the points to polygons, and then define our maximum damages in dollars per square meter.
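
For instance (with purely illustrative numbers), if national consultations suggest a total replacement cost of about 1.2 million USD for a typical school, and the median school footprint in the data is roughly 1,200 m2, the corresponding maximum damage would be in the order of 1,000 USD per square meter:

# Purely illustrative numbers: replace these with values from national consultations
replacement_cost_school = 1_200_000   # total replacement cost of a typical school (USD)
median_school_footprint = 1_200       # median school footprint (m2)

max_damage_per_m2 = replacement_cost_school / median_school_footprint
print(f"Maximum damage: {max_damage_per_m2:,.0f} USD per m2")   # 1,000 USD per m2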

maxdam_dict = {'community_centre' : 1000, 
               'school' : 1000, 
               'kindergarten' : 1000, 
               'university' : 1000, 
               'college' : 1000,
               'library' : 1000
              }

To be used in our damage assessment, we convert this dictionary to a pandas DataFrame.

maxdam = pd.DataFrame.from_dict(maxdam_dict,orient='index').reset_index()
maxdam.columns = ['object_type','damage']

And check if any of the object types are missing from the DataFrame.

missing = set(sub_types) - set(maxdam.object_type)

if len(missing) > 0:
    print(f"Missing object types in maxdam: \033[1m{', '.join(missing)}\033[0m. Please add them before you continue.")

Vulnerability data#

Similarly to the maximum damages, specifying the vulnerability curves is complex. We generally have limited information about the quality of the assets, their level of deterioration, and other asset-level characteristics.

vulnerability_path = "https://zenodo.org/records/10203846/files/Table_D2_Multi-Hazard_Fragility_and_Vulnerability_Curves_V1.0.0.xlsx?download=1"
vul_df = pd.read_excel(vulnerability_path,sheet_name='F_Vuln_Depth').ffill()

And let’s have a look at all the available options

with pd.option_context('display.max_rows', None, 'display.max_colwidth', None):
  display(vul_df.iloc[:2,:].T)
ID number    Infrastructure description    Additional characteristics
F1.1 plant Small power plants, capacity <100 MW
F1.2 plant Medium power plants, capacity 100-500 MW
F1.3 plant Large power plants, >500 MW
F1.4 plant thermal plant
F1.5 plant wind turbine
F1.6 plant wind turbine
F1.7 plant wind turbine
F2.1 substation Low Voltage Substation
F2.2 substation Medium Voltage Substation
F2.3 substation High Voltage Substation
F5.1 cable Distribution circuits buried crossings
F6.1 Power (minor) line Distribution circuits (non-crossings)
F6.2 cable Distribution circuits elevated crossings
F6.3 Energy system Generalized curve for energy assets in diked areas
F7.1 Roads Roads
F7.2 Roads Roads
F7.2a (lower boundary) Roads Roads
F7.2b (upper boundary) Roads Roads
F7.3 Roads Roads
F7.4 Roads Motorways and trunk roads - sophisticated accessories - low flow
F7.5 Roads Motorways and trunk roads - sophisticated accessories - high flow
F7.6 Roads Motorways and trunk roads - without sophisticated accessories - low flow
F7.7 Roads Motorways and trunk roads - without sophisticated accessories - high flow
F7.8 Roads Other roads - low flow
F7.9 Roads Other roads - high flow
F7.10 Roads Roads
F7.11 Roads Roads
F7.12 Roads Roads
F7.13 Roads Roads
F8.1 Railways Double-tracked railway
F8.2 Railways Double-tracked railway
F8.3 Railways Railways
F8.4 Railways Railways
F8.5 Railways Railways
F8.6 Railways Railways
F8.6a (lower boundary) Railways Railways
F8.6b (upper boundary) Railways Railways
F8.7 Railways Railways
F8.8 Railways Railway station
F9.1 Airports Airports
F9.2 Airports Airports
F9.3 Airports Airports
F10.1 Telecommunication Communication tower
F12.1 Telecommunication Communication system
F13.1 water storage tanks Water storage tanks at grade concrete
F13.2 water storage tanks Water storage tanks at grade steel
F13.3 water storage tanks Water storage tanks at grade wood
F13.4 water storage tanks Water storage tanks elevated
F13.5 water storage tanks Water storage tanks below grade
F14.1 Water treatment plants Small water treatment plants open/gravity - average flood design
F14.2 Water treatment plants Medium water treatment plants open/gravity - average flood design
F14.3 Water treatment plants Large water treatment plants open/gravity - average flood design
F14.4 Water treatment plants small/medium/large water treatment plants open/gravity - above average flood design
F14.5 Water treatment plants small/medium/large water treatment plants open/gravity - below average flood design
F14.6 Water treatment plants Small water treatment plants closed/pressure
F14.7 Water treatment plants Medium water treatment plants closed/pressure
F14.8 Water treatment plants Large water treatment plants closed/pressure
F14.9 Water treatment plants small/medium/large water treatment plants closed/pressure - above average flood design
F14.10 Water treatment plants small/medium/large water treatment plants closed/pressure - below average flood design
F15.1 Water well Wells
F16.1 Transmission and distribution pipelines Exposed transmission pipeline crossing
F16.2 Transmission and distribution pipelines Buried transmission pipeline crossing
F16.3 Transmission and distribution pipelines Pipelines (non-crossing)
F17.1 Others Pumping plants (small) below grade
F17.2 Others Pumping plants (medium/large) below grade
F17.3 Others Pumping plants (small) above grade
F17.4 Others Pumping plants (medium/large) above grade
F17.5 Others Pumping plants
F17.6 Others Control vaults and stations
F18.1 Wastewater treatment plant Small wastewater treatment plants
F18.2 Wastewater treatment plant Medium wastewater treatment plants
F18.3 Wastewater treatment plant Large wastewater treatment plants
F18.4 Wastewater treatment plant Small, medium, large Wastewater treatment plants - above average flood design
F18.5 Wastewater treatment plant Small, medium, large Wastewater treatment plants- below average flood design
F18.6 Wastewater treatment plant Wastewater treatment plant
F19.1 Transmission and distribution pipelines Sewers & Interceptors: Exposed collector river crossings
F19.2 Transmission and distribution pipelines Sewers & Interceptors: Buried collector river crossings
F19.3 Transmission and distribution pipelines Sewers & Interceptors: Pipes (non-crossings)
F20.1 Others Control vaults and control stations
F20.2 Others Lift stations: lift station (small), wet well/dry well
F20.3 Others Lift stations: lift station (medium large), wet well/dry well
F20.4 Others Lift stations: lift station (small), submersible
F20.5 Others Lift stations: lift station (medium large), submersible
F21.1 Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.2 Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.3 Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.3a (lower boundary) Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.3b (upper boundary) Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.4 Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.4a (lower boundary) Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.4b (upper boundary) Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.5 Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.5a (lower boundary) Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.5b (upper boundary) Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.6 Education & Health Generalized curve for commercial buildings, which also includes schools and hospitals
F21.7 School buildings Generalized curve for companies (incl. government)
F21.8 Health & education Generalized curve for offices in diked areas
F21.9 Education & Health Generalized function for industry
F21.10 School buildings Generalized curve for buildings
F21.11 School buildings School buildings
F21.12 Health buildings Health buildings
F21.13 Education buildings Education buildings

And select a curve to use for each different subtype we are analysing.

sub_types
array(['school', 'kindergarten', 'university', 'college', 'library'],
      dtype=object)
selected_curves = dict(zip(sub_types, ['F21.7'] * len(sub_types)))

The next step is to extract the curves from the database and prepare them for use in our analysis.

We start by selecting the curve IDs from the larger pandas DataFrame vul_df:

damage_curves = vul_df[['ID number']+list(selected_curves.values())]
damage_curves = damage_curves.iloc[4:125,:]

Then, for convenience, we rename the index to the hazard intensity we are considering.

damage_curves.set_index('ID number',inplace=True)
damage_curves.index = damage_curves.index.rename('Depth')  

And we make sure that our damage values are stored as floating-point numbers.

damage_curves = damage_curves.astype(np.float32)

And ensure that the columns of the curves link back to the different asset types we are considering:

damage_curves.columns = sub_types

There could be some NaN values at the tail of some of the curves. To make sure the code works, we fill the NaN values with the last valid value of each curve.

damage_curves = damage_curves.ffill()

Finally, we make sure that the index of the damage curves (the inundation depth) is expressed in the same unit as the hazard data (e.g., meters or centimeters).

damage_curves.index = damage_curves.index*100
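
As a quick sanity check (illustrative; this assumes the flood map stores inundation depths in centimeters), the curve index should now span a plausible range of depths in centimeters:

# Quick sanity check: the curve index should now be expressed in the same unit
# as the hazard data (here assumed to be centimeters)
print(f"Curve depths now range from {float(damage_curves.index.min()):.0f} "
      f"to {float(damage_curves.index.max()):.0f} cm")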

In some cases, however, we do not know exactly which curve to use. We can then run the analysis with all relevant curves. To do so, we select all education-related curves from the database of Nirandjan et al. (2024).

unique_curves = set([x for xs in DICT_CIS_VULNERABILITY_FLOOD['education'].values() for x in xs])

And we create a dictionary that contains, for each vulnerability curve, a pandas DataFrame with a column for every object type.

multi_curves = {}
for unique_curve in unique_curves:

    curve_creation = damage_curves.copy()
    
    get_curve_values = vul_df[unique_curve].iloc[4:125].values
    
    for curve in curve_creation:
        curve_creation.loc[:,curve] = get_curve_values

    multi_curves[unique_curve] = curve_creation.astype(np.float32)
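
To get a feel for what we just created, we can list the curve IDs and inspect one of the entries; each entry should be a DataFrame with one column per education asset type:

# Inspect the dictionary of curves: one DataFrame per curve ID,
# each with one column per education asset type
print(sorted(multi_curves.keys()))
multi_curves[sorted(multi_curves.keys())[0]].head()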

Ancillary data for processing#

We load country and subnational (admin-1) boundaries from Natural Earth, which we will use to clip the hazard data and to aggregate the damage results.

world = gpd.read_file("https://github.com/nvkelso/natural-earth-vector/raw/master/10m_cultural/ne_10m_admin_0_countries.shp")
world_plot = world.to_crs(3857)
admin1 = gpd.read_file("https://github.com/nvkelso/natural-earth-vector/raw/master/10m_cultural/ne_10m_admin_1_states_provinces.shp")

3. Preparing the Data#

Clip the hazard data to the country of interest.

country_bounds = world.loc[world.ADM0_ISO == country_iso3].bounds
country_geom = world.loc[world.ADM0_ISO == country_iso3].geometry
hazard_country = hazard_map.rio.clip_box(minx=country_bounds.minx.values[0],
                     miny=country_bounds.miny.values[0],
                     maxx=country_bounds.maxx.values[0],
                     maxy=country_bounds.maxy.values[0]
                    )

Convert point data to polygons#

For some asset types, both points and polygons are available. Estimating the damage is easier and more consistent when all assets are polygons. Therefore, we convert those points to polygons below.

Let’s first get an overview of the different geometry types for all the assets we are considering in this analysis:

features['geom_type'] = features.geom_type
features.groupby(['object_type','geom_type']).count()['geometry']
object_type   geom_type   
college       MultiPolygon     13
              Point             8
kindergarten  MultiPolygon      4
              Point            12
library       MultiPolygon     14
              Point             4
school        MultiPolygon    457
              Point           128
university    MultiPolygon     20
              Point             2
Name: geometry, dtype: int64

The results above indicate that several asset types that are expected to be polygons are also stored as points. It would be preferable to convert them all to polygons. For the points, we have to estimate an average footprint size, which we can then use to create a buffer around each asset.

To do so, let's grab the polygon data and estimate their size. We use the median so we are not too strongly influenced by extreme outliers. If preferred, change .median() to .mean() (or any other statistic).

polygon_features = features.loc[features.geometry.geom_type.isin(['Polygon','MultiPolygon'])].to_crs(3857)
polygon_features['square_m2'] = polygon_features.area

square_m2_object_type = polygon_features[['object_type','square_m2']].groupby('object_type').median()
square_m2_object_type
square_m2
object_type
college 57920.460301
kindergarten 849.080034
library 521.562639
school 1164.782393
university 418.091453

Now we create a list in which we define the assets we want to “polygonize”:

points_to_polygonize = ['college','kindergarten','library','school','university']

And take them from our data again:

all_assets_to_polygonize = features.loc[(features.object_type.isin(points_to_polygonize)) & (features.geometry.geom_type == 'Point')]

When performing a buffer, it is best to work in meters. As such, we briefly convert the point data to a coordinate reference system (CRS) in meters.

all_assets_to_polygonize = all_assets_to_polygonize.to_crs(3857)
def polygonize_point_per_asset(asset):

    # we first need to set a buffer length. This is half of width/length of an asset.
    buffer_length = np.sqrt(square_m2_object_type.loc[asset.object_type].values[0])/2

    # then we can buffer the asset
    return asset.geometry.buffer(buffer_length,cap_style='square')

collect_new_point_geometries = all_assets_to_polygonize.apply(lambda asset:  polygonize_point_per_asset(asset),axis=1).set_crs(3857).to_crs(4326).values
features.loc[(features.object_type.isin(points_to_polygonize)) & (features.geometry.geom_type == 'Point'),'geometry'] = collect_new_point_geometries
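
As a quick check (illustrative), we can verify that the square buffers closely reproduce the median footprints computed above:

# Illustrative check: the buffered points should have areas close to the median
# footprint per object type (the half-side of the square buffer is sqrt(area)/2)
check = all_assets_to_polygonize.apply(lambda asset: polygonize_point_per_asset(asset), axis=1).set_crs(3857)
print(check.area.groupby(all_assets_to_polygonize.object_type).median().round(1))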

And check if any “undesired” Points are still left:

features.loc[(features.object_type.isin(points_to_polygonize)) & (features.geometry.geom_type == 'Point')]
osm_id geometry object_type geom_type

And we remove the 'geom_type' column again, as we no longer need it.

features = features.drop(['geom_type'],axis=1)

4. Performing the Damage Assessment#

We will use the DamageScanner approach: a fully optimised damage calculation method that can handle a wide range of inputs. We first perform the assessment using a single damage curve per asset type.

%%time
damage_results = DamageScanner(hazard_country, features, damage_curves, maxdam).calculate()
Overlay raster with vector: 100%|██████████████████████████████████████████████████████| 10/10 [00:03<00:00,  3.22it/s]
convert coverage to meters: 100%|███████████████████████████████████████████████████| 13/13 [00:00<00:00, 13543.46it/s]
Calculating damage: 100%|████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 4116.41it/s]
CPU times: total: 1.44 s
Wall time: 3.19 s

And we run the assessment again, now with the full range of vulnerability curves.

%%time
all_results = DamageScanner(hazard_country, features, damage_curves, maxdam).calculate(multi_curves=multi_curves)
Overlay raster with vector: 100%|██████████████████████████████████████████████████████| 10/10 [00:02<00:00,  4.39it/s]
convert coverage to meters: 100%|███████████████████████████████████████████████████| 13/13 [00:00<00:00, 12261.29it/s]
Calculating damage: 100%|████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 4810.41it/s]
Calculating damage: 100%|████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 6509.01it/s]
Calculating damage: 100%|████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 8554.43it/s]
Calculating damage: 100%|████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 6499.70it/s]
Calculating damage: 100%|███████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 12517.44it/s]
Calculating damage: 100%|████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 6567.81it/s]
CPU times: total: 1.94 s
Wall time: 2.37 s
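
Before saving, it can be informative to compare the total damage across the different vulnerability curves, which gives a first indication of how sensitive the results are to the curve choice. Assuming that the multi-curve output contains one damage column per curve ID (as in the risk results further below), a quick comparison could look like this:

# Total damage per vulnerability curve (assumes one damage column per curve ID)
total_per_curve = all_results[list(multi_curves.keys())].sum()
print(total_per_curve.round(0))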

5. Save the Results#

For further analysis, we can save the results in full detail, or save summary estimates per subnational unit.

hazard = 'river_flood'
return_period = '1_100'
damage_results.to_file(f'Education_Damage_{country_full_name}_{hazard}_{return_period}.gpkg')
admin1_country = admin1.loc[admin1.sov_a3 == country_iso3]
damage_results = damage_results.sjoin(admin1_country[['adm1_code','name','geometry']])
admin1_damage = admin1_country.merge(damage_results[['name','damage']].groupby('name').sum(),
                                     left_on='name',
                                     right_on='name',
                                     how='outer')[['name','adm1_code','geometry','damage']]
admin1_damage.to_file(f'Admin1_Education_Damage_{country_full_name}_{hazard}_{return_period}.gpkg')

6. Visualizing the Results#

The results of the damage assessment can be visualized using charts and maps. This will provide a clear representation of which infrastructure is most affected by the hazard and the expected damage levels.

Let's first plot the distribution of the damages.

fig, ax = plt.subplots(1,1,figsize=(7, 3))

sns.histplot(data=damage_results,x='damage',ax=ax)
<Axes: xlabel='damage', ylabel='Count'>

Plot the locations of the most damaged education facilities.

fig, ax = plt.subplots(1,1,figsize=(10, 5))

subset = damage_results.to_crs(3857).sort_values('damage',ascending=False).head(20)
subset.geometry = subset.centroid
subset.plot(ax=ax,column='object_type',markersize=10,legend=True)
world_plot.loc[world.SOV_A3 == country_iso3].plot(ax=ax,facecolor="none",edgecolor='black')
cx.add_basemap(ax, source=cx.providers.CartoDB.Positron,alpha=0.5)
ax.set_axis_off()
viz(damage_results)

7. Performing the Risk Assessment#

To perform the risk assessment, we need to select the return periods we want to include and create a dictionary of hazard maps as input. We create this below.

return_periods = [2,5,10,50,100,200,500,1000] 

hazard_dict = {}
for return_period in return_periods:
    hazard_map = xr.open_dataset(f"https://hazards-data.unepgrid.ch/global_pc_h{return_period}glob.tif", engine="rasterio")

    hazard_dict[return_period] = hazard_map.rio.clip_box(minx=country_bounds.minx.values[0],
                     miny=country_bounds.miny.values[0],
                     maxx=country_bounds.maxx.values[0],
                     maxy=country_bounds.maxy.values[0]
                    )
    

And below we will run a multi-curve risk assessment.

risk_results = DamageScanner(hazard_country, features, damage_curves, maxdam).risk(hazard_dict,multi_curves=multi_curves)
Risk Calculation: 100%|██████████████████████████████████████████████████████████████████| 8/8 [00:21<00:00,  2.72s/it]
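
The risk metric computed here is the expected annual damage (EAD) per asset. A minimal sketch of how EAD is commonly approximated from return-period damages is shown below (purely illustrative numbers; the exact integration scheme used inside DamageScanner may differ):

# Illustrative damages (USD) for the return periods used above
rp_example = np.array([2, 5, 10, 50, 100, 200, 500, 1000])
damage_example = np.array([0, 1e4, 5e4, 2e5, 3e5, 4e5, 5e5, 6e5])   # illustrative values

# Exceedance probability of each return period
prob_example = 1 / rp_example

# EAD is the area under the damage-exceedance probability curve
# (trapezoidal approximation over the probability intervals)
ead = 0.0
for i in range(len(rp_example) - 1):
    dp = prob_example[i] - prob_example[i + 1]                     # width of the probability interval
    ead += 0.5 * (damage_example[i] + damage_example[i + 1]) * dp  # average damage in the interval
print(f"Expected annual damage: {ead:,.0f} USD")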
grouped_risks = risk_results[['object_type']+list(multi_curves.keys())].groupby('object_type').sum()
# Create the figure for the boxplot of expected annual damages
plt.figure(figsize=(8, 6))

# Creating the boxplot explicitly to access its elements
boxplot = grouped_risks.T.boxplot(column=['kindergarten', 'library', 'school'], grid=False, patch_artist=True, return_type='dict')

# Customizing box colors and ensuring all lines have the same color
colors = ['#ccd5ae', '#e9edc9', '#faedcd']
line_color = '#333333'  # Uniform line color for all components

# Customizing boxes
for box, color in zip(boxplot['boxes'], colors):  # Accessing box artists
    box.set_facecolor(color)
    box.set_edgecolor(line_color)
    box.set_linewidth(1.5)

# Customizing whiskers, caps, medians, and fliers
for whisker in boxplot['whiskers']:
    whisker.set_color(line_color)
    whisker.set_linewidth(1.5)

for cap in boxplot['caps']:
    cap.set_color(line_color)
    cap.set_linewidth(1.5)

for median in boxplot['medians']:
    median.set_color(line_color)
    median.set_linewidth(2)

for flier in boxplot['fliers']:
    flier.set_markerfacecolor(line_color)
    flier.set_markeredgecolor(line_color)

# Adding titles and labels
plt.title('Expected Annual Damage per object type', fontsize=16, fontweight='bold')
plt.ylabel('Total EAD (Log Scale)', fontsize=12)
plt.xlabel('Object Types', fontsize=12)
plt.yscale('log')  # Using log scale for better visualization of large range
plt.xticks(rotation=0, fontsize=10)
plt.yticks(fontsize=10)

# Adding gridlines for better readability
plt.grid(axis='y', linestyle='--', alpha=0.7)

# Display the boxplot
plt.show()