# Frequently Asked Questions
> **Note:** This page is under construction. We continue to refine our answers to the questions already on the list and will add more entries in the future.
## Data
**Can I replace the LUISA landcover data with another land use dataset?**
Yes, but expect the workflows to require adaptation to the changed land use classification. For example, when mapping monetary values to land use classes and computing damages based on a vulnerability curve, you may have to edit, remove, or add entries in the value and damage tables to match the classes of the substituted dataset.
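As a rough sketch of such an adaptation (the class names and values below are hypothetical, not the workflow's actual table), a pandas damage table can be renamed, pruned, and extended to match the substituted classification:

```python
import pandas as pd

# Hypothetical damage table indexed by land use class (values in EUR/m^2)
damage_table = pd.DataFrame(
    {"max_damage": [600.0, 300.0, 100.0]},
    index=pd.Index(["residential", "commercial", "agriculture"], name="class"),
)

# Substituted dataset uses different classes: rename, drop, and add entries
adapted = damage_table.rename(index={"agriculture": "cropland"})
adapted = adapted.drop(index="commercial")  # class absent in the new dataset
adapted.loc["industrial"] = 400.0           # class present only in the new dataset
```

The same pattern applies to the vulnerability curve tables: every class of the substituted dataset needs a matching row before damages can be computed.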
**How can I create a custom damage table for the risk assessment?**
We recommend starting with the provided table and adapting the GDP per capita value first of all. From there, you can edit the table any way you like to obtain an accurate representation of your region. If you find that the preconfigured formulas in the tables do not lead to representative values for your area of interest, you can change the formulas in addition to their parameters, or prescribe values directly instead of computing them via a formula.
The preconfigured formulas in our tables were fitted to global data and constructed to provide a consistent estimation of values/damages across the globe. See the report of Huizinga et al. (2017) and our related overview in the Handbook for more information on the values and formulas in the damage tables.
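As an illustration of the scaling idea only: the power-law form below follows the general approach of Huizinga et al. (2017), but the elasticity used here is a placeholder, not the fitted coefficient from the report — consult the report and the Handbook overview for the actual parameters:

```python
def scale_max_damage(ref_damage, gdp_pc_local, gdp_pc_ref, elasticity=1.0):
    """Rescale a reference maximum damage value by relative GDP per capita.

    elasticity=1.0 is a placeholder; the fitted coefficient must be taken
    from Huizinga et al. (2017) / the Handbook.
    """
    return ref_damage * (gdp_pc_local / gdp_pc_ref) ** elasticity

# Example: a region with 80% of the reference GDP per capita
local_value = scale_max_damage(600.0, gdp_pc_local=24_000, gdp_pc_ref=30_000)
```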
**How can I best work with population data when a high-density city dominates the surrounding low-density areas?**
We recommend controlling the scaling of outputs to obtain meaningful results. This could mean, e.g., manually setting the bins for risk categories when using a risk index method so that information from less densely populated regions is retained. You can further consider creating separate outputs for low- and high-density regions.
A logarithmic scaling of results during visualization can also help to retain information across multiple orders of magnitude. However, care must be taken when interpreting logarithmic data: a linear increase in darkness/intensity along a colorbar no longer corresponds to a proportional increase in the represented values, which can lead to misinterpretation.
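As a small illustration (with hypothetical population counts), manually chosen bin edges can preserve contrast among low-density cells that automatic equal-width bins would lump together; a logarithmic transform such as `np.log10` serves a similar purpose during visualization:

```python
import numpy as np

# Hypothetical population counts per grid cell: one dense city among rural cells
population = np.array([120, 80, 250, 15_000, 300, 60, 90])

# Automatic equal-width bins: almost everything lands in the lowest category
auto_bins = np.linspace(population.min(), population.max(), 5)
auto_classes = np.digitize(population, auto_bins[1:-1])

# Manually chosen bin edges keep the contrast between low-density cells
manual_bins = [0, 100, 200, 500, np.inf]
manual_classes = np.digitize(population, manual_bins[1:-1])
```

With the equal-width bins all rural cells fall into a single category; the manual edges spread them over several categories while still isolating the city in the top one.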
**How can I import results from a workflow into other software (e.g., GIS) for further processing?**
Many of our workflows are configured to present their results not just as plots in the notebooks, but also to write them to disk in a georeferenced format for further processing. This is usually a NetCDF file with latitude-longitude coordinates or a GeoTIFF file with an attached coordinate reference system.
If a result is not currently exported by a workflow, we recommend using the xarray and rioxarray packages to export georeferenced information.
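For example (array contents, coordinates, and file names below are hypothetical), a result can be wrapped in a georeferenced xarray DataArray and then written to NetCDF or GeoTIFF; the export calls are left commented since they write files to disk:

```python
import numpy as np
import xarray as xr

# Wrap a hypothetical result array with latitude/longitude coordinates
result = xr.DataArray(
    np.random.rand(4, 5),
    dims=("latitude", "longitude"),
    coords={
        "latitude": np.linspace(45.0, 45.3, 4),
        "longitude": np.linspace(8.0, 8.4, 5),
    },
    name="risk_index",
)

# NetCDF export (readable by QGIS and most GIS software):
# result.to_netcdf("risk_index.nc")

# GeoTIFF export via rioxarray (attaches a coordinate reference system):
# import rioxarray  # registers the .rio accessor
# result.rio.write_crs("EPSG:4326").rio.to_raster("risk_index.tif")
```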
## Risk workflows
**Can I use the maps for the future hazard of the river floods workflow to assess damages?**
Technically yes, but the data resolution is too coarse for us to recommend using them in the damage assessment.
We instead suggest considering damage estimation approaches based on the high-resolution historical data, e.g., by considering scenarios like “what if today’s 20-year return period becomes tomorrow’s 10-year return period?” and evaluating the likelihood of these scenarios based on the coarser-resolution future data.
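To make such a scenario concrete, here is a minimal self-contained sketch (with hypothetical return periods and damage values) comparing the expected annual damage for today's return periods against a scenario in which every return period halves:

```python
import numpy as np

# Hypothetical damages (in million EUR) at the flood maps' return periods
return_periods = np.array([10.0, 20.0, 50.0, 100.0])  # years
damages = np.array([5.0, 12.0, 30.0, 55.0])

def expected_annual_damage(rps, dmg):
    """Trapezoidal integration of damage over exceedance probability.

    Only integrates between the smallest and largest available probability;
    the tails outside the given return periods are ignored in this sketch.
    """
    p = 1.0 / np.asarray(rps, dtype=float)
    order = np.argsort(p)
    p, dmg = p[order], np.asarray(dmg, dtype=float)[order]
    return 0.5 * np.sum((p[1:] - p[:-1]) * (dmg[1:] + dmg[:-1]))

ead_today = expected_annual_damage(return_periods, damages)
# Scenario: return periods halve, e.g. the 20-year event becomes a 10-year event
ead_scenario = expected_annual_damage(return_periods / 2, damages)
```

Halving all return periods doubles every exceedance probability, so in this sketch the expected annual damage doubles as well.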
**When multiple rivers flow through my area of interest, should I consider them together or separately in the river flooding workflow?**
In the workflow, and in the flood maps used as its input, the rivers are considered together. However, for a more detailed analysis it could indeed be interesting to look at each river’s catchment separately.
This can be implemented, e.g., by clipping the area of interest based on the shape of the catchment (CLIMAAX/FLOODS#8).
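In practice this clipping would typically be done with geopandas and rioxarray's clipping tools, as discussed in the linked issue; the following self-contained sketch (with a hypothetical rectangular catchment) only illustrates the underlying idea of masking grid cells by a catchment polygon:

```python
import numpy as np

# Hypothetical catchment outline as a polygon of (lon, lat) vertices
catchment = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]

def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal line at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Build a boolean mask on the flood map grid and blank out cells outside
lons = np.linspace(-1, 5, 7)
lats = np.linspace(-1, 4, 6)
flood_depth = np.ones((lats.size, lons.size))  # hypothetical flood depths
mask = np.array([[point_in_polygon(lo, la, catchment) for lo in lons] for la in lats])
clipped = np.where(mask, flood_depth, np.nan)
```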
**How can I apply the satellite-based heatwave risk assessment to regions larger than the maximum size allowed by the RSLab data portal?**
Land surface temperature data for regions larger than allowed for download by the RSLab data portal can be computed from Landsat Collection 2 Level 1 imagery, e.g., with the pylandtemp package.
A proposed workflow addition for downloading the required satellite images from a USGS data portal and computing the land surface temperature can be found in the pull request CLIMAAX/HEATWAVES#15.
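As an illustration of the first processing step only (converting a Level 1 thermal band to top-of-atmosphere brightness temperature), here is a sketch using the standard Landsat radiance rescaling and Planck inversion formulas. The constants below are typical Landsat 8 Band 10 values and must be read from your scene's MTL metadata in practice; emissivity and atmospheric corrections (as performed by pylandtemp) are still needed to obtain land surface temperature:

```python
import numpy as np

# Radiometric rescaling and thermal constants -- typical values for
# Landsat 8 Band 10; read the actual values from your scene's MTL metadata.
ML, AL = 3.342e-4, 0.1        # RADIANCE_MULT_BAND_10, RADIANCE_ADD_BAND_10
K1, K2 = 774.8853, 1321.0789  # K1/K2_CONSTANT_BAND_10

dn = np.array([20000, 30000, 40000])  # hypothetical Level 1 digital numbers
radiance = ML * dn + AL               # top-of-atmosphere spectral radiance
brightness_temp = K2 / np.log(K1 / radiance + 1.0)  # in Kelvin

# Brightness temperature is only the first step: emissivity and atmospheric
# corrections are required to obtain land surface temperature.
```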
**How should I best choose the digital elevation model (DEM) required by the Wildfire (ML approach) workflow?**
The reference DEM is important in the workflow because it defines the reference grid to which all other input data are interpolated. We recommend choosing data with a resolution between 50 m and 500 m for best results. Increasing the resolution further yields diminishing returns for the workflow’s output, not least because the preconfigured climate datasets with 1 km resolution limit further improvements.
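In practice tools such as rioxarray's reprojection or xarray's interpolation methods would handle the regridding; the following minimal numpy sketch (with hypothetical grids and a placeholder field) only illustrates the idea of bringing an input field onto the reference DEM grid via nearest-neighbour lookup:

```python
import numpy as np

# Hypothetical reference DEM grid (coarse) and an input field on a finer grid
ref_lons = np.linspace(0, 1, 11)   # reference grid taken from the DEM
ref_lats = np.linspace(0, 1, 11)
src_lons = np.linspace(0, 1, 101)  # finer-resolution input dataset
src_lats = np.linspace(0, 1, 101)
src_data = np.add.outer(src_lats, src_lons)  # placeholder field: lat + lon

def regrid_nearest(data, src_x, src_y, dst_x, dst_y):
    """Nearest-neighbour interpolation onto the reference grid."""
    ix = np.abs(src_x[None, :] - dst_x[:, None]).argmin(axis=1)
    iy = np.abs(src_y[None, :] - dst_y[:, None]).argmin(axis=1)
    return data[np.ix_(iy, ix)]

on_ref_grid = regrid_nearest(src_data, src_lons, src_lats, ref_lons, ref_lats)
```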