High-speed loading of structured FieldsFiles.

Deprecated since version 1.10: This module is deprecated; please use its replacement instead.

In this module:

iris.experimental.fieldsfile.load(filenames, callback=None)

Load structured FieldsFiles and PP files.


  • filenames:

    One or more filenames.


  • callback:

A modifier/filter function; see the iris module documentation for details.


    Unlike the standard iris.load() operation, the callback is applied to the final result cubes, not individual input fields.

Returns: An iris.cube.CubeList.

This is a streamlined load operation, to be used only on fieldsfiles or PP files whose fields repeat regularly over the same vertical levels and times. The results aim to be equivalent to those generated by iris.load(), but the operation is substantially faster for input that is structured.

The structured input files should conform to the following requirements:

  • the file must contain fields for all possible combinations of the vertical levels and time points found in the file.

  • the fields must occur in a regular repeating order within the file.

    (For example: a sequence of fields for NV vertical levels, repeated for NP different forecast periods, repeated for NT different forecast times).

  • all other metadata must be identical across all fields of the same phenomenon.
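The ordering requirement above can be illustrated with a small sketch. This is not Iris code; the level, period, and time values are purely hypothetical, and the sketch just enumerates the full cross-product of metadata values that a structured file must contain, with vertical levels varying fastest:

```python
from itertools import product

# Hypothetical metadata for a structured file: NT forecast times,
# NP forecast periods, NV vertical levels (all values illustrative).
times = ["2024-01-01T00", "2024-01-01T12"]   # NT = 2
periods = [0, 6, 12]                         # NP = 3
levels = [1000, 850, 500]                    # NV = 3

# The regular repeating order described above: levels vary fastest,
# then forecast periods, then forecast times.
expected_order = list(product(times, periods, levels))

# A structured file must contain exactly NT * NP * NV fields,
# one for every combination of time, period, and level.
assert len(expected_order) == len(times) * len(periods) * len(levels)
```

A file whose fields appear in exactly this sequence, with identical remaining metadata per phenomenon, satisfies the structured-loading requirements.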

Each group of fields with the same values of LBUSER4, LBUSER7 and LBPROC is identified as a separate phenomenon; these groups are processed independently and returned as separate result cubes.
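The phenomenon grouping can be sketched in plain Python. The dicts below are stand-ins for PP field headers, not real Iris field objects; in the PP header, LBUSER4 is the STASH code, LBUSER7 the model identifier, and LBPROC the processing code:

```python
from collections import defaultdict

# Stand-in field headers (illustrative values only).
fields = [
    {"lbuser4": 16203, "lbuser7": 1, "lbproc": 0,   "data": "T, level 1"},
    {"lbuser4": 16203, "lbuser7": 1, "lbproc": 0,   "data": "T, level 2"},
    {"lbuser4": 3236,  "lbuser7": 1, "lbproc": 128, "data": "time-mean tas"},
]

# Group fields into phenomena by the (LBUSER4, LBUSER7, LBPROC) key;
# each group is processed independently into one result cube.
phenomena = defaultdict(list)
for field in fields:
    key = (field["lbuser4"], field["lbuser7"], field["lbproc"])
    phenomena[key].append(field)

# Two distinct keys -> two separate result cubes.
assert len(phenomena) == 2
```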


Each input file is loaded independently, so a single result cube cannot combine data from multiple input files.


The resulting time-related coordinates (‘time’, ‘forecast_time’ and ‘forecast_period’) may be mapped to shared cube dimensions and in some cases can also be multidimensional. However, the vertical level information must have a simple one-dimensional structure, independent of the time points; otherwise an error is raised.
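To see why a time coordinate can become multidimensional, consider that the validity time is the forecast reference time plus the forecast period. The following sketch (with hypothetical dates, not taken from any real file) builds such a two-dimensional ‘time’ array over the (reference time, forecast period) dimensions:

```python
from datetime import datetime, timedelta

# Illustrative values: NT forecast reference times and NP forecast periods.
reference_times = [datetime(2024, 1, 1, 0), datetime(2024, 1, 1, 12)]
periods_hours = [0, 6, 12]

# Validity time varies over both dimensions:
# time_2d[i][j] = reference_times[i] + periods_hours[j].
time_2d = [
    [rt + timedelta(hours=p) for p in periods_hours]
    for rt in reference_times
]

# e.g. the 12Z run at a 12-hour lead is valid at 00Z the next day.
assert time_2d[1][2] == datetime(2024, 1, 2, 0)
```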


Where input data does not have a fully regular arrangement, the corresponding result cube will have a single anonymous extra dimension which indexes over all the input fields.

This can happen if, for example, some fields are missing, have slightly different metadata, or appear out of order in the file.
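The fallback can be sketched as a simple completeness check. This is not the loader's actual algorithm, just an illustration under assumed metadata: if the fields present do not cover the full cross-product of times and levels, separate dimensions cannot be built, and all fields are indexed along one anonymous dimension instead:

```python
from itertools import product

# Hypothetical metadata found in the file.
times = ["T0", "T1"]
levels = [1000, 850, 500]

# The (time, level) pairs actually present -- one field is missing.
present = {("T0", 1000), ("T0", 850), ("T0", 500),
           ("T1", 1000), ("T1", 850)}

# Regular input covers every combination of time and level.
is_regular = present == set(product(times, levels))

if is_regular:
    arrangement = "separate (time, level) dimensions"
else:
    arrangement = "single anonymous dimension over all fields"

assert not is_regular
```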


Any non-regular metadata variation in the input should be strictly avoided, as not all irregularities are detected, which can cause erroneous results.