Setting up the plugin

The plugin can be configured as follows:

module load clint regiklim-ces
freva plugin climpact --doc

Climpact (v2022.9.1): This plugin processes multi-dimensional climate model data. The output of this plugin can be used for various impact model simulations. The plugin selects a region of interest as defined by a geojson file and calculates a time series of area-weighted field means over the defined region. Options: shape_file (default: <undefined>)

Select a shape resource file defining your region of interest. If none is selected (default), the whole region defined in the climate data is taken. Note: You can either select a path on the HPC system or a web URL pointing to the file.

region_name (default: <undefined>)

Select a predefined German municipality of interest. This selection only takes effect if you don't choose a shape file.

project (default: <undefined>)

Select the climate parent project to refine the model search.

product (default: <undefined>)

Select the climate parent product to refine the model search.

experiment (default: <undefined>) [mandatory]

Choose a climate model experiment.

models (default: <undefined>) [mandatory]

Select the climate model(s). You can select multiple models to get output for multiple models at once.

time_frequency (default: day) [mandatory]

Select the climate model output time frequency.

variable (default: <undefined>) [mandatory]

Set the variable name(s) of interest.

start (default: <undefined>)

Set the first time step of the climate data that should be processed. If empty (default), the first time step of the climate data is taken.

end (default: <undefined>)

Set the last time step of the climate data that should be processed. If empty (default), the last time step of the climate data is taken.

output_file_type (default: nc)

Select the filetype of the plugin output.

mask_type (default: none)

Surface type that should be MASKED (none: no mask is applied, land: all land areas are masked, sea: all water areas are masked).

mask_method (default: centres)

How the masked region is determined. centres selects grid cells where only grid-cell centres are within the polygon of the shape region. corners selects grid cells where any of the 4 grid-cell corners lies within the polygon of the shape region; select corners if you want the region selection to be less restrictive.
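The difference between the two methods can be sketched with a simple point-in-polygon test. This is a simplified illustration, not the plugin's actual implementation; all function names are hypothetical:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the closed polygon?

    polygon is a list of (x, y) vertex tuples.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that cross the horizontal line through y matter
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside


def select_cell(centre, corners, polygon, method="centres"):
    """Return True if a grid cell belongs to the selected region."""
    if method == "centres":
        # Strict: the cell centre itself must lie inside the polygon
        return point_in_polygon(*centre, polygon)
    # "corners": less restrictive - one corner inside is enough
    return any(point_in_polygon(x, y, polygon) for (x, y) in corners)
```

A cell straddling the polygon edge is dropped by centres but kept by corners, which is why corners yields a slightly larger region.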

plot_map (default: True)

Plot a map of each selected variable and model to check the projection and masking of the region.

pre_process_module (default: <undefined>)

If you wish to apply your own processing routine, set the path to the python file where the routine is defined. If not set (default), a time series of the selected variables will be calculated.

pre_process_function (default: pre_proc)

Set the name of the function defining the pre-processing routine. This option only takes effect if the pre_process_module option is set.

extra_scheduler_options (default: --ntasks-per-node=20)

Set additional options for the job submission to the workload manager (, separated). Note: batchmode and web only.

If you want to process output from multiple climate models, you can choose multiple model names.

Note: The plugin always processes all available ensemble members within the current data stack for each model.

The processed region

The user has to define a model region that is processed by the plugin. This region is defined in a shape or geojson file. It can be located either on the HPC system or at a publicly available URL, such as a website, a Dropbox cloud store, etc. Ideally the region should be defined in one file. If you have multiple files, for example a file defining the border polygons and another defining the projection, you should zip those files into one common zip file.
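The bundling step can be done with Python's standard library; a minimal sketch (file names are just examples):

```python
import zipfile
from pathlib import Path


def bundle_region(files, archive="region.zip"):
    """Zip several shape-resource files (e.g. .shp + .prj) into one archive.

    The resulting zip file can then be passed to the plugin as a single
    shape_file argument.
    """
    with zipfile.ZipFile(archive, "w") as zf:
        for path in map(Path, files):
            # Store only the file name, not the full directory path
            zf.write(path, arcname=path.name)
    return archive
```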

Pre processing

In its default configuration the plugin will apply a mask, as defined by the input region, and calculate a field average over this defined region. Everything outside the defined region is ignored.
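The idea behind this default behaviour can be sketched with numpy. Weighting each grid cell by the cosine of its latitude is a common approximation of the area weighting mentioned above; the exact scheme the plugin uses may differ:

```python
import numpy as np


def masked_field_mean(data, lat, mask):
    """Area-weighted mean over the unmasked region.

    data : 2D array with dimensions (lat, lon)
    lat  : 1D array of latitudes in degrees
    mask : boolean array, True inside the region of interest

    Cells outside the region are ignored; the remaining cells are
    weighted by cos(latitude) to approximate their surface area.
    """
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(data)
    weights = np.where(mask, weights, 0.0)
    return (data * weights).sum() / weights.sum()
```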

If more complicated pre-processing is desired, for example to derive additional variables, you can define your own pre-processing routines using the pre_process_module and pre_process_function plugin arguments. This user-defined module has to be written in the Python programming language and is executed by the plugin instead of the default field-mean method.

pre_process_module

Path to the file that contains the function used for pre-processing. The plugin will add the parent folder of this file to the global python path, so you can import any code within this module.
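The loading mechanism described above can be sketched as follows. This is a simplification of what the plugin presumably does internally, not its actual code:

```python
import importlib
import sys
from pathlib import Path


def load_function(module_path, function_name):
    """Import `function_name` from the python file at `module_path`.

    The parent folder of the file is added to sys.path, so anything
    importable from that folder is available to the loaded module.
    """
    path = Path(module_path)
    sys.path.insert(0, str(path.parent))
    # Import the file by its stem, e.g. "calc_lcl" for calc_lcl.py
    module = importlib.import_module(path.stem)
    return getattr(module, function_name)
```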

pre_process_function

The name of the function (as defined in the pre_process_module file) that applies the pre processing instead of the default method (field average).

The function is expected to take exactly one argument, an xarray.Dataset holding all climate data, and return an xarray.Dataset. See the next subsection for an example of what such a function might look like.

Note: After this method has been called, no additional processing is done to the data. Instead, the data is saved to disk.

Example

This example shows what a user-defined pre-processing method might look like. In this example we are going to use the metpy library to calculate the lifted condensation level (LCL) from temperature, dew point and pressure.

Suppose the function is saved to a file called /home/m/myuser/preproc/calc_lcl.py, and the name of the function that is supposed to do the processing is lcl. Then the plugin argument pre_process_module should be /home/m/myuser/preproc/calc_lcl.py while pre_process_function should be lcl.

The lcl function should expect exactly one argument and return one xarray.Dataset:

import xarray as xr


def lcl(dset: xr.Dataset) -> xr.Dataset:
    """Calculate the lifted condensation level with help of metpy.

    Parameters
    ----------
    dset:
        Dataset holding data arrays for surface temperature, surface
        dew point and surface pressure.

    Returns
    -------
    Dataset with the lcl added.
    """
    import metpy.calc
    # Use metpy to calculate the lcl (pressure, temperature, dew point)
    lcl_pres, lcl_temp = metpy.calc.lcl(dset.ps, dset.ts, dset.tdps)
    # Calculate the height of the LCL in km
    lcl_in_km = metpy.calc.pressure_to_height_std(lcl_pres)
    dims = dset.ts.dims
    # Create an xarray DataArray from the lcl height
    lcl_data = xr.DataArray(
        lcl_in_km.magnitude,
        coords=dset.ts.coords,
        dims=dims,
        name="lcl",
        attrs=dset.ts.attrs,
    )
    # Attributes from the surface temperature have been copied; adjust
    # them to lcl-specific ones
    lcl_data.attrs["units"] = "km"
    lcl_data.attrs["standard_name"] = "lcl"
    lcl_data.attrs["long_name"] = "Lifted condensation level"
    lcl_data.attrs.pop("code", "")
    # Create a new dataset
    new_data = xr.Dataset()
    new_data.attrs = dset.attrs
    new_data["lcl"] = lcl_data
    # Now calculate and return the field averages over the last two
    # dimensions (the horizontal ones)
    return new_data.mean(dim=dims[-2:])