
Model Bias

The bias represents the difference between the observed climate and the model representation over the historical record.

Here, bias is calculated by comparing the annual observed values with the corresponding ones simulated by climate models, averaged over the 1971-2000 period. For temperature, the observed values are simply subtracted from the model estimates. For precipitation, data from both sources are first aggregated as the annual cumulative rainfall, then the bias is computed as a percentage difference. Bias is calculated for individual grid points at the climate model resolution (i.e., ≃11km), then spatially aggregated to the NUTS2 level by taking the median for each region.
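The calculation described above can be sketched as follows. This is an illustrative example with synthetic data, not the CLIMAAX implementation; the array shapes and values are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical annual values for one NUTS2 region: 30 years (1971-2000)
# at 100 grid points, for observations and a model simulation.
rng = np.random.default_rng(0)
years, points = 30, 100

obs_temp = rng.normal(12.0, 0.8, (years, points))    # annual mean, degC
mod_temp = obs_temp + rng.normal(0.5, 0.3, points)   # model with a warm bias

obs_pr = rng.gamma(40.0, 15.0, (years, points))      # annual total, mm
mod_pr = obs_pr * rng.normal(1.1, 0.05, points)      # model with a wet bias

# Temperature: observed values subtracted from the model estimates,
# averaged over the reference period, per grid point
temp_bias = (mod_temp - obs_temp).mean(axis=0)

# Precipitation: percentage difference of the period-mean annual totals
pr_bias = 100.0 * (mod_pr.mean(axis=0) - obs_pr.mean(axis=0)) / obs_pr.mean(axis=0)

# Spatial aggregation to the NUTS2 region: median over grid points
print(f"regional temperature bias: {np.median(temp_bias):+.2f} degC")
print(f"regional precipitation bias: {np.median(pr_bias):+.1f} %")
```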

The bias calculation is available for all the EURO-CORDEX regional climate models used in the CLIMAAX workflows. The EOBS and ERA5 reanalysis datasets are used as sources for the observed climate. The user is referred to Vautard et al. (2021) for further reading on EURO-CORDEX bias.


Model Bias: Temperature Percentiles

The temperature bias shown in the figures above is the bias of the mean temperature over the 1971-2000 period. While this is a good summary indicator of model bias, it is not necessarily a good indicator of model bias for extreme events. However, the extremes of the temperature distribution are often the most relevant when analysing model projections in the context of climate risk assessment.

Here, the biases of each model are shown for individual percentiles of the temperature distribution. Each row of the plot represents one model, the color represents the bias (red: model has warm bias, blue: model has cold bias; hover to see the values). The low percentiles of the distribution correspond to cold extremes, while high percentiles of the distribution correspond to warm extremes.

How to read the patterns: two examples

If a row is all the same color, the model has a consistent bias across all values. E.g., if everything is blue, the model is always too cold.

If a row starts red and transitions to blue for the high percentiles, the model has less temperature variability than the reference dataset. Cold extremes are warmer than they should be, warm extremes are colder than they should be.
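The second pattern can be reproduced with a small sketch: comparing a model that has the right mean but too little variability against a reference distribution yields exactly this red-to-blue transition. The data here are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference temperatures vs. a model with the same mean but reduced spread
ref = rng.normal(12.0, 6.0, 10_000)
mod = rng.normal(12.0, 4.0, 10_000)

# Bias at individual percentiles of the distribution
percentiles = np.arange(5, 100, 5)
bias = np.percentile(mod, percentiles) - np.percentile(ref, percentiles)

# Reduced variability appears as a warm bias (red) at low percentiles
# and a cold bias (blue) at high percentiles
for p, b in zip(percentiles, bias):
    print(f"p{p:02d}: {b:+.2f} degC")
```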

Model Bias: Precipitation Percentiles

The precipitation bias shown in the figures above is the bias of the mean yearly total precipitation over the 1971-2000 period. While this is a good summary indicator of model bias, it is not necessarily a good indicator of model bias for extreme events. However, the extremes of the precipitation distribution are often the most relevant when analysing model projections in the context of climate risk assessment.

Here, the biases of each model are shown for individual percentiles of the yearly total precipitation distribution. Each row of the plot represents one model, the color represents the bias (green: model has wet bias, brown: model has dry bias; hover to see the values). The high percentiles of the distribution correspond to heavy precipitation extremes.

Precipitation bias as percentage

The precipitation bias is given as a percentage of the reference precipitation (relative bias). Values indicating dry bias are limited to at most -100%, where the model predicts no precipitation at all. In contrast, the percentages for wet biases are not limited and can exceed 100%. If there is no or only very little precipitation in a reference percentile, the bias percentage becomes very large even for small model biases in absolute terms. Examples for these limitations can be seen, e.g., in the low percentiles for Cyprus (CY00).
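The asymmetry described above follows directly from the definition of the relative bias. A minimal sketch (the numbers are made up for illustration):

```python
# Relative precipitation bias in percent: bounded below at -100 %
# (model predicts no rain at all), unbounded above, and inflated
# when the reference value is close to zero.
def relative_bias(model, reference):
    return 100.0 * (model - reference) / reference

print(relative_bias(0.0, 200.0))    # no rain at all: -100 %
print(relative_bias(500.0, 200.0))  # wet bias can exceed +100 %: +150 %
print(relative_bias(5.0, 0.5))      # near-zero reference: +900 %
```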

Model Uncertainty

The model bias can only be evaluated for the past, where a reference dataset for comparison is available. A measure of model uncertainty, the variability of possible outcomes predicted for the future climate, is obtained by comparing the models to each other within the ensemble without a reference dataset. Generally, the more models are involved in the comparison, the more reliable the estimate of uncertainty becomes.

Model uncertainty is here represented by the spread of the model ensemble, shown relative to the ensemble mean, relative to an individual model projection, or in absolute terms (select with the dropdown below). The range between the most extreme predictions of the ensemble is a simple measure of the uncertainty. The position of a selected reference model within that range indicates on which side of its projection more uncertainty lies. For example, a model in the lower part of the temperature uncertainty range has more uncertainty towards warmer temperatures than towards colder ones.

The average uncertainty range was evaluated for temperature and precipitation over the historical (1986-2005) and four future time periods (2021-2040, 2041-2060, 2061-2080, and 2081-2100) based on monthly data without bias correction for the NUTS2 regions and for three RCP scenarios (select below; not all models available for all scenarios).
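The quantities described above can be sketched as follows. The ensemble values are synthetic, and treating the first member as the reference model is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical projected warming (degC) for one region and period
# from a 10-member model ensemble
ensemble = rng.normal(2.5, 0.6, 10)
reference_model = ensemble[0]               # assumed reference choice

abs_range = ensemble.max() - ensemble.min() # absolute uncertainty range
rel_to_mean = ensemble - ensemble.mean()    # spread relative to ensemble mean
rel_to_ref = ensemble - reference_model     # spread relative to reference model

# Position of the reference within the range: near the bottom means
# more uncertainty towards warmer outcomes than colder ones
position = (reference_model - ensemble.min()) / abs_range
print(f"range: {abs_range:.2f} degC, reference at {position:.0%} of the range")
```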


Model bias dominates the uncertainty

There is a general tendency for the uncertainty to increase as the projections go further into the future, but the uncertainty range is already large for the historical period. This implies that a large contribution to the model uncertainty comes from the differing biases of the individual models.

Majid Niazkar, Andrea Rivosecchi and Lisa Ferrari

The authors acknowledge contributions from Muhammad Faizan Aslam for help with data processing.