# iris.fileformats.um

At present, the only UM file types supported are true FieldsFiles and LBCs. Other types of UM file may fail to load correctly (or at all).

In this module:

iris.fileformats.um.um_to_pp(filename, read_data=False, word_depth=None)

Extract individual PPFields from within a UM Fieldsfile-like file.

Returns an iterator over the fields contained within the FieldsFile, returned as iris.fileformats.pp.PPField instances.

Args:

• filename (string):
Specify the name of the FieldsFile.

Kwargs:

• read_data (boolean):
Specify whether to read the associated PPField data within the FieldsFile. Default value is False.

Returns:
Iteration of iris.fileformats.pp.PPField instances.

For example:

>>> from iris.fileformats import um
>>> for field in um.um_to_pp(filename):
...     print(field)



iris.fileformats.um.load_cubes(filenames, callback, constraints=None, _loader_kwargs=None)

Loads cubes from filenames of UM fieldsfile-like files.

Args:

• filenames - list of filenames to load

• callback - a modifier/filter function applied to each loaded cube, as in the standard Iris load functions

Kwargs:

• constraints - an optional list of Iris constraints to select the required cubes
Note

The resultant cubes may not be in the order that they are in the file (order is not preserved when there is a field with orography references).


iris.fileformats.um.load_cubes_32bit_ieee(filenames, callback, constraints=None)

Loads cubes from filenames of 32bit ieee converted UM fieldsfile-like files.

See load_cubes() for keyword details.


iris.fileformats.um.structured_um_loading()

Load cubes from structured UM Fieldsfile and PP files.

“Structured” loading is a streamlined, fast load operation, to be used only on fieldsfiles or PP files whose fields repeat regularly over the same vertical levels and times (see full details below).

This method is a context manager which enables an alternative loading mechanism for ‘structured’ UM files, providing much faster load times. Within the scope of the context manager, this affects all standard Iris load functions (load(), load_cube(), load_cubes() and load_raw()), when loading from UM format files (PP or fieldsfiles).

For example:

>>> import iris
>>> from iris.fileformats.um import structured_um_loading
>>> filepath = iris.sample_data_path('uk_hires.pp')
>>> with structured_um_loading():
...     cube = iris.load_cube(filepath, 'air_potential_temperature')
...
>>> cube
<iris 'Cube' of air_potential_temperature / (K) (time: 3; model_level_number: 7; grid_latitude: 204; grid_longitude: 187)>


The results from this are normally equivalent to those generated by iris.load(), but the operation is substantially faster for input which is structured.

For calls other than load_raw(), the resulting cubes are concatenated over all the input files, so there is normally just one output cube per phenomenon.

However, actual loaded results are somewhat different from non-structured loads in many cases, and in a variety of ways. Most commonly, dimension ordering and the choice of dimension coordinates are often different.

When a user callback function is used with structured loading, it is called in a somewhat different way from a 'normal' load: the callback is called once for each basic structured cube loaded, which is normally the whole of one phenomenon from a single input file. In particular, the callback's "field" argument is a FieldCollation, from which "field.fields" gives the list of PPFields from which that cube was built, and the properties "field.load_filepath" and "field.load_file_indices" reference the original file locations of the cube data. The code required is therefore different from that of a 'normal' callback. For an example of this, see the Iris test code.
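As a rough illustration of the callback shape described above, the following sketch uses invented stand-in classes (DummyPPField and DummyFieldCollation are not Iris code) to show a callback that inspects the collation's member fields rather than a single PPField:

```python
# Hypothetical sketch of a structured-loading callback.  The attribute
# names (fields, data_filepath) follow the FieldCollation description in
# this document; the Dummy* classes are invented stand-ins for Iris types.

class DummyPPField:
    def __init__(self, lbuser4):
        self.lbuser4 = lbuser4  # STASH code header element

class DummyFieldCollation:
    def __init__(self, fields, filepath):
        self.fields = fields            # the PPFields the cube was built from
        self.data_filepath = filepath   # original input file path

def structured_callback(cube, field, filename):
    # 'field' is a whole collation, not a single PPField: examine its
    # member fields to decide how to filter or annotate the cube.
    stash_codes = {f.lbuser4 for f in field.fields}
    print(f"{filename}: cube built from {len(field.fields)} fields, "
          f"STASH codes {sorted(stash_codes)}")

collation = DummyFieldCollation(
    [DummyPPField(16004) for _ in range(28)], "example.pp")
structured_callback(None, collation, "example.pp")
```

The key difference from a normal callback is that one call covers a whole structured cube (typically all fields of one phenomenon from one file), so per-field logic must loop over `field.fields`.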

Notes on applicability:

For results to be correct and reliable, the input files must conform to the following requirements :

• the file must contain fields for all possible combinations of the vertical levels and time points found in the file.

• the fields must occur in a regular repeating order within the file, within the fields of each phenomenon.

For example: a sequence of fields for NV vertical levels, repeated for NP different forecast periods, repeated for NT different forecast times.

• all other metadata must be identical across all fields of the same phenomenon.

Each group of fields with the same values of LBUSER4, LBUSER7 and LBPROC is identified as a separate phenomenon: These groups are processed independently and returned as separate result cubes. The need for a regular sequence of fields applies separately to the fields of each phenomenon, such that different phenomena may have different field structures, and can be interleaved in any way at all.
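The grouping rule above can be sketched in plain Python (the Field class here is a minimal stand-in, not Iris code): fields are partitioned by their (LBUSER4, LBUSER7, LBPROC) values, and each group may be regular within itself even when the file interleaves the phenomena:

```python
# Sketch of the phenomenon-grouping rule: partition fields by
# (LBUSER4, LBUSER7, LBPROC).  'Field' is an invented stand-in class.
from collections import defaultdict

class Field:
    def __init__(self, lbuser4, lbuser7, lbproc, lblev):
        self.lbuser4 = lbuser4  # STASH code
        self.lbuser7 = lbuser7  # model identifier
        self.lbproc = lbproc    # processing code
        self.lblev = lblev      # level index

# Two interleaved phenomena, each regular (2 times x 3 levels) on its own.
fields = []
for time in range(2):
    for lev in range(3):
        fields.append(Field(16004, 1, 0, lev))  # phenomenon A
        fields.append(Field(10, 1, 0, lev))     # phenomenon B

groups = defaultdict(list)
for f in fields:
    groups[(f.lbuser4, f.lbuser7, f.lbproc)].append(f)

# Each group contains a complete, regular sequence of 6 fields, even
# though the two phenomena alternate field-by-field in the "file".
for key, grp in sorted(groups.items()):
    print(key, len(grp))
```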

Note

At present, fields with different values of 'LBUSER5' (pseudo-level) are also treated internally as different phenomena, yielding one raw cube per level. The effects of this are not normally noticed, as the resulting multiple raw cubes merge together again in a 'normal' load. However, it is not an ideal solution, as operation is less efficient (in particular, slower): it is done to avoid a limitation in the underlying code which would otherwise load data on pseudo-levels incorrectly. This may be corrected in future.

Known current shortcomings:

• orography fields may be returned with extra dimensions, e.g. time, where multiple fields exist in an input file.
• if some input files contain a single coordinate value while others contain multiple values, these will not be merged into a single cube over all input files: instead, the single- and multiple-valued sets will typically produce two separate cubes with overlapping coordinates. This can be worked around by loading files individually, or with load_raw(), and merging/concatenating explicitly.

Note

The resulting time-related coordinates (‘time’, ‘forecast_time’ and ‘forecast_period’) may be mapped to shared cube dimensions and in some cases can also be multidimensional. However, the vertical level information must have a simple one-dimensional structure, independent of the time points, otherwise an error will be raised.

Note

Where input data does not have a fully regular arrangement, the corresponding result cube will have a single anonymous extra dimension which indexes over all the input fields.

This can happen if, for example, some fields are missing; or have slightly different metadata; or appear out of order in the file.

Warning

Restrictions and limitations:

Any non-regular metadata variation in the input should be strictly avoided, as not all irregularities are detected, which can cause erroneous results.

Various field header words which can in some cases vary are assumed to have a constant value throughout a given phenomenon. This is not checked, and can lead to erroneous results if it is not the case. Header elements of potential concern include LBTIM, LBCODE, LBVC and LBRSVD4 (ensemble number).


class iris.fileformats.um.BasicFieldCollation(fields)

An object representing a group of UM fields with array structure that can be vectorized into a single cube.

For example:

Suppose we have a set of 28 fields repeating over 7 vertical levels for each of 4 different data times. If a BasicFieldCollation is created to contain these, it can identify that this is a 4*7 regular array structure.

This BasicFieldCollation will then have the following properties:

• within ‘element_arrays_and_dims’:
Element ‘blev’ has array shape (7,) and dims of (1,). Elements ‘t1’ and ‘t2’ have shape (4,) and dims (0,). The other elements (lbft, lbrsvd4 and lbuser5) all have scalar array values and dims=None.

Note

If no array structure is found, the element values are all either scalar or full-length 1-D vectors.
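The 4*7 example above can be written out concretely. This is an illustrative sketch only: the level and time values are invented, but the (value_array, dims) mapping shape follows the description in the text:

```python
# Illustrative sketch of the 'element_arrays_and_dims' structure for the
# 4*7 example: 28 fields over 7 vertical levels and 4 data times.
# The actual values here are invented for illustration.

blev = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]          # one value per level
t1 = ["2020-01-0%dT00" % d for d in range(1, 5)]    # one value per time

element_arrays_and_dims = {
    "blev": (blev, (1,)),   # shape (7,): varies along dim 1 (levels)
    "t1":   (t1,   (0,)),   # shape (4,): varies along dim 0 (times)
    "lbft": (0,    None),   # scalar: constant across all 28 fields
}
```

Note that a scalar element carries dims of None, not an empty tuple, matching the element_arrays_and_dims description below.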

class iris.fileformats.um.FieldCollation(fields, filepath)

Bases: iris.fileformats.um._fast_load_structured_fields.BasicFieldCollation

Args:

• fields (iterable of iris.fileformats.pp.PPField):
The fields in the collation.

• filepath (string):
The path of the input file the fields were read from.

core_data()
bmdi
data
data_field_indices

Field indices of the contained PPFields in the input file.

This records the original file location of the individual data fields contained, within the input datafile.

Returns:
An integer array of shape self.vector_dims_shape.
data_filepath
data_proxy
element_arrays_and_dims

Value arrays for vector metadata elements.

A dictionary mapping element_name: (value_array, dims).

The arrays are reduced to their minimum dimensions. A scalar array has an associated ‘dims’ of None (instead of an empty tuple).

fields
realised_dtype
vector_dims_shape

The shape of the array structure.
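For the 4*7 example discussed earlier, the relationship between vector_dims_shape and data_field_indices can be sketched as follows (a pure-Python illustration, not Iris code, assuming the 28 fields are stored level-fastest in the file):

```python
# Sketch: 28 fields stored level-fastest map onto a (4, 7) array of
# file positions, matching vector_dims_shape.  Illustration only.

vector_dims_shape = (4, 7)   # (times, levels)

# Field i in the file corresponds to time i // 7 and level i % 7, so
# the index array is the file positions reshaped to (4, 7).
data_field_indices = [
    [t * 7 + z for z in range(7)] for t in range(4)
]

assert data_field_indices[0][0] == 0    # first time, first level
assert data_field_indices[3][6] == 27   # last time, last level
```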