iris.fileformats.netcdf

Module to support the loading of a NetCDF file into an Iris cube.

See also: the netCDF4-python package.

Also refer to the document ‘NetCDF Climate and Forecast (CF) Metadata Conventions’, Version 1.4, 27 February 2009.

In this module:

iris.fileformats.netcdf.load_cubes(filenames, callback=None)

Loads cubes from a list of NetCDF filenames/URLs.

Args:

  • filenames (string/list):

    One or more NetCDF filenames/DAP URLs to load from.

Kwargs:

  • callback (callable or None):

    Optional modifier/filter function applied to each raw cube as it is loaded (passed on to iris.io.run_callback).

Returns:
Generator of loaded NetCDF iris.cube.Cube instances.
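
A minimal usage sketch (the filename ‘air_temperature.nc’ is illustrative; load_cubes returns a generator, so realise it with list() if all cubes are needed at once):

import iris.fileformats.netcdf as netcdf_io

# Load every data variable in the file as an Iris cube.
cubes = list(netcdf_io.load_cubes(['air_temperature.nc']))
for cube in cubes:
    print(cube.summary(shorten=True))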

iris.fileformats.netcdf.parse_cell_methods(nc_cell_methods)

Parse a CF cell_methods attribute string into a tuple of zero or more CellMethod instances.

Args:

  • nc_cell_methods (str):

    The value of the cell methods attribute to be parsed.

Returns:
A tuple of zero or more iris.coords.CellMethod instances.

Multiple coordinates, intervals and comments are supported. If a method has a non-standard name, a warning will be issued, but the results are not affected.
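
For example, parsing a typical CF cell_methods string (the attribute value is illustrative):

from iris.fileformats.netcdf import parse_cell_methods

# One method over 'time', with an interval and a comment.
methods = parse_cell_methods('time: mean (interval: 1 hour comment: sampled hourly)')
# 'methods' is a tuple of iris.coords.CellMethod instances.
print(methods)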

iris.fileformats.netcdf.save(cube, filename, netcdf_format='NETCDF4', local_keys=None, unlimited_dimensions=None, zlib=False, complevel=4, shuffle=True, fletcher32=False, contiguous=False, chunksizes=None, endian='native', least_significant_digit=None, packing=None)

Save cube(s) to a netCDF file, given the cube and the filename.

  • Iris will write CF 1.5 compliant NetCDF files.
  • The attributes dictionaries on each cube in the saved cube list will be compared and common attributes saved as NetCDF global attributes where appropriate.
  • Keyword arguments specifying how to save the data are applied to each cube. To use different settings for different cubes, use the NetCDF Context manager (Saver) directly.
  • The save process will stream the data payload to the file using biggus, enabling large data payloads to be saved and maintaining the ‘lazy’ status of the cube’s data payload, unless the netcdf_format is explicitly specified to be ‘NETCDF3’ or ‘NETCDF3_CLASSIC’.

Args:

  • cube (iris.cube.Cube or iris.cube.CubeList):

    A cube, cube list or other iterable of cubes to be saved to the netCDF file.

  • filename (string):

    Name of the netCDF file to save the cube(s) to.

Kwargs:

  • netcdf_format (string):

    Underlying netCDF file format, one of ‘NETCDF4’, ‘NETCDF4_CLASSIC’, ‘NETCDF3_CLASSIC’ or ‘NETCDF3_64BIT’. Default is ‘NETCDF4’ format.

  • local_keys (iterable of strings):

    An iterable of cube attribute keys. Any cube attributes with matching keys will become attributes on the data variable rather than global attributes.

  • unlimited_dimensions (iterable of strings and/or iris.coords.Coord objects):

    Explicit list of coordinate names (or coordinate objects) corresponding to coordinate dimensions of cube to save with the NetCDF dimension variable length ‘UNLIMITED’. By default, the outermost (first) dimension for each cube is used. Only the ‘NETCDF4’ format supports multiple ‘UNLIMITED’ dimensions. To save no unlimited dimensions, use unlimited_dimensions=[] (an empty list).

  • zlib (bool):

    If True, the data will be compressed in the netCDF file using gzip compression (default False).

  • complevel (int):

    An integer between 1 and 9 describing the level of compression desired (default 4). Ignored if zlib=False.

  • shuffle (bool):

    If True, the HDF5 shuffle filter will be applied before compressing the data (default True). This significantly improves compression. Ignored if zlib=False.

  • fletcher32 (bool):

    If True, the Fletcher32 HDF5 checksum algorithm is activated to detect errors. Default False.

  • contiguous (bool):

    If True, the variable data is stored contiguously on disk. Default False. Setting to True for a variable with an unlimited dimension will trigger an error.

  • chunksizes (tuple of int):

    Used to manually specify the HDF5 chunksizes for each dimension of the variable. A detailed discussion of HDF chunking and I/O performance is available here: http://www.hdfgroup.org/HDF5/doc/H5.user/Chunking.html. Basically, you want the chunk size for each dimension to match as closely as possible the size of the data block that users will read from the file. chunksizes cannot be set if contiguous=True.

  • endian (string):

    Used to control whether the data is stored in little or big endian format on disk. Possible values are ‘little’, ‘big’ or ‘native’ (default). The library will automatically handle endian conversions when the data is read, but if the data is always going to be read on a computer with the opposite endianness to the one used to create the file, there may be some performance advantage to be gained by setting the endianness explicitly.

  • least_significant_digit (int):

    If least_significant_digit is specified, variable data will be truncated (quantized). In conjunction with zlib=True this produces ‘lossy’, but significantly more efficient compression. For example, if least_significant_digit=1, data will be quantized using numpy.around(scale*data)/scale, where scale = 2**bits, and bits is determined so that a precision of 0.1 is retained (in this case bits=4). From http://www.esrl.noaa.gov/psd/data/gridded/conventions/cdc_netcdf_standard.shtml: “least_significant_digit – power of ten of the smallest decimal place in unpacked data that is a reliable value”. Default is None, meaning no quantization (‘lossless’ compression).

  • packing (type or string or dict or list):

    A numpy integer datatype (signed or unsigned) or a string that describes a numpy integer dtype (e.g. ‘i2’, ‘short’, ‘u4’), or a dict of packing parameters as described below, or an iterable of such types, strings, or dicts. This provides support for netCDF data packing as described in http://www.unidata.ucar.edu/software/netcdf/docs/BestPractices.html#bp_Packed-Data-Values. If this argument is a type (or type string), appropriate values of scale_factor and add_offset will be automatically calculated based on cube.data and possible masking. For masked data, fill_value is taken from netCDF4.default_fillvals. For more control, pass a dict with one or more of the following keys: dtype (required), scale_factor, add_offset, and fill_value. To save multiple cubes with different packing parameters, pass an iterable of types, strings, dicts, or None, one for each cube. Note that automatic calculation of packing parameters will trigger loading of lazy data; set them manually using a dict to avoid this. The default is None, in which case the datatype is determined from the cube and no packing will occur.

Returns:
None.

Note

The zlib, complevel, shuffle, fletcher32, contiguous, chunksizes and endian keywords are silently ignored for netCDF 3 files that do not use HDF5.

See also

NetCDF Context manager (Saver).

Deprecated since version 1.8.0: NetCDF default saving behaviour currently assigns the outermost dimensions to unlimited. This behaviour is to be deprecated, in favour of no automatic assignment. To switch to the new behaviour, set iris.FUTURE.netcdf_no_unlimited to True.
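
A usage sketch (the cube, filenames and packing values are illustrative):

import iris
from iris.fileformats.netcdf import save

cube = iris.load_cube('input.nc')

# Save with gzip compression (the shuffle filter is applied by default).
save(cube, 'compressed.nc', zlib=True, complevel=4)

# Pack the data as 16-bit integers with explicit scale/offset, which avoids
# the automatic calculation that would load lazy data.
save(cube, 'packed.nc',
     packing={'dtype': 'i2', 'scale_factor': 0.01, 'add_offset': 273.15})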

Provide a simple CF name to CF coordinate mapping.

class iris.fileformats.netcdf.CFNameCoordMap

Bases: object

Provide a simple CF name to CF coordinate mapping.

append(name, coord)

Append the given name and coordinate pair to the mapping.

Args:

  • name:

    CF name of the associated coordinate.

  • coord:

    The coordinate of the associated CF name.

Returns:
None.
coord(name)

Return the coordinate, given a CF name.

Args:

  • name:

    CF name of the associated coordinate.

Returns:
The coordinate.
name(coord)

Return the CF name, given a coordinate.

Args:

  • coord:

    The coordinate of the associated CF name.

Returns:
The CF name.
coords

Return all the coordinates.

names

Return all the CF names.
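
A small sketch of the mapping in use (the name and coordinate are illustrative):

from iris.coords import DimCoord
from iris.fileformats.netcdf import CFNameCoordMap

coord_map = CFNameCoordMap()
latitude = DimCoord([0.0, 10.0], standard_name='latitude', units='degrees')

# Record the pairing, then look it up in either direction.
coord_map.append('latitude', latitude)
assert coord_map.coord('latitude') is latitude
assert coord_map.name(latitude) == 'latitude'
print(coord_map.names)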

A reference to the data payload of a single NetCDF file variable.

class iris.fileformats.netcdf.NetCDFDataProxy(shape, dtype, path, variable_name, fill_value)

Bases: object

A reference to the data payload of a single NetCDF file variable.

dtype
fill_value
ndim
path
shape
variable_name

A manager for saving netcdf files.

class iris.fileformats.netcdf.Saver(filename, netcdf_format)

Bases: object

A manager for saving netcdf files.

Args:

  • filename (string):

    Name of the netCDF file to save the cube.

  • netcdf_format (string):

    Underlying netCDF file format, one of ‘NETCDF4’, ‘NETCDF4_CLASSIC’, ‘NETCDF3_CLASSIC’ or ‘NETCDF3_64BIT’. Default is ‘NETCDF4’ format.

Returns:
None.

For example:

# Initialise Manager for saving
with Saver(filename, netcdf_format) as sman:
    # Iterate through the cubelist.
    for cube in cubes:
        sman.write(cube)
static check_attribute_compliance(container, data)
update_global_attributes(attributes=None, **kwargs)

Update the CF global attributes based on the provided iterable/dictionary and/or keyword arguments.

Args:

  • attributes (dict or iterable of key, value pairs):

    CF global attributes to be updated.
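
For example, within an open Saver (the attribute values are illustrative):

# 'sman' is a Saver opened as in the example above.
sman.update_global_attributes({'institution': 'An example institution'},
                              history='created by an Iris saving script')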

write(cube, local_keys=None, unlimited_dimensions=None, zlib=False, complevel=4, shuffle=True, fletcher32=False, contiguous=False, chunksizes=None, endian='native', least_significant_digit=None, packing=None)

Wrapper for saving cubes to a NetCDF file.

Args:

  • cube (iris.cube.Cube):

    A cube to be saved to the netCDF file.

Kwargs:

  • local_keys (iterable of strings):

    An iterable of cube attribute keys. Any cube attributes with matching keys will become attributes on the data variable rather than global attributes.

  • unlimited_dimensions (iterable of strings and/or iris.coords.Coord objects):

    Explicit list of coordinate names (or coordinate objects) corresponding to coordinate dimensions of cube to save with the NetCDF dimension variable length ‘UNLIMITED’. By default, the outermost (first) dimension for each cube is used. Only the ‘NETCDF4’ format supports multiple ‘UNLIMITED’ dimensions. To save no unlimited dimensions, use unlimited_dimensions=[] (an empty list).

  • zlib (bool):

    If True, the data will be compressed in the netCDF file using gzip compression (default False).

  • complevel (int):

    An integer between 1 and 9 describing the level of compression desired (default 4). Ignored if zlib=False.

  • shuffle (bool):

    If True, the HDF5 shuffle filter will be applied before compressing the data (default True). This significantly improves compression. Ignored if zlib=False.

  • fletcher32 (bool):

    If True, the Fletcher32 HDF5 checksum algorithm is activated to detect errors. Default False.

  • contiguous (bool):

    If True, the variable data is stored contiguously on disk. Default False. Setting to True for a variable with an unlimited dimension will trigger an error.

  • chunksizes (tuple of int):

    Used to manually specify the HDF5 chunksizes for each dimension of the variable. A detailed discussion of HDF chunking and I/O performance is available here: http://www.hdfgroup.org/HDF5/doc/H5.user/Chunking.html. Basically, you want the chunk size for each dimension to match as closely as possible the size of the data block that users will read from the file. chunksizes cannot be set if contiguous=True.

  • endian (string):

    Used to control whether the data is stored in little or big endian format on disk. Possible values are ‘little’, ‘big’ or ‘native’ (default). The library will automatically handle endian conversions when the data is read, but if the data is always going to be read on a computer with the opposite endianness to the one used to create the file, there may be some performance advantage to be gained by setting the endianness explicitly.

  • least_significant_digit (int):

    If least_significant_digit is specified, variable data will be truncated (quantized). In conjunction with zlib=True this produces ‘lossy’, but significantly more efficient compression. For example, if least_significant_digit=1, data will be quantized using numpy.around(scale*data)/scale, where scale = 2**bits, and bits is determined so that a precision of 0.1 is retained (in this case bits=4). From http://www.esrl.noaa.gov/psd/data/gridded/conventions/cdc_netcdf_standard.shtml: “least_significant_digit – power of ten of the smallest decimal place in unpacked data that is a reliable value”. Default is None, meaning no quantization (‘lossless’ compression).

  • packing (type or string or dict or list):

    A numpy integer datatype (signed or unsigned) or a string that describes a numpy integer dtype (e.g. ‘i2’, ‘short’, ‘u4’), or a dict of packing parameters as described below. This provides support for netCDF data packing as described in http://www.unidata.ucar.edu/software/netcdf/docs/BestPractices.html#bp_Packed-Data-Values. If this argument is a type (or type string), appropriate values of scale_factor and add_offset will be automatically calculated based on cube.data and possible masking. For masked data, fill_value is taken from netCDF4.default_fillvals. For more control, pass a dict with one or more of the following keys: dtype (required), scale_factor, add_offset, and fill_value. Note that automatic calculation of packing parameters will trigger loading of lazy data; set them manually using a dict to avoid this. The default is None, in which case the datatype is determined from the cube and no packing will occur.

Returns:
None.

Note

The zlib, complevel, shuffle, fletcher32, contiguous, chunksizes and endian keywords are silently ignored for netCDF 3 files that do not use HDF5.

Deprecated since version 1.8.0: NetCDF default saving behaviour currently assigns the outermost dimension as unlimited. This behaviour is to be deprecated, in favour of no automatic assignment. To switch to the new behaviour, set iris.FUTURE.netcdf_no_unlimited to True.
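
A sketch of per-cube save settings via the Saver (cube_a, cube_b and the filename are illustrative):

from iris.fileformats.netcdf import Saver

with Saver('mixed_settings.nc', 'NETCDF4') as sman:
    # First cube: gzip-compressed.
    sman.write(cube_a, zlib=True, complevel=6)
    # Second cube: packed to 16-bit integers, with no unlimited dimensions.
    sman.write(cube_b, packing='i2', unlimited_dimensions=[])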

Warning issued when a cell method in a cell_methods attribute has a non-standard name.

class iris.fileformats.netcdf.UnknownCellMethodWarning

Bases: exceptions.Warning

args
message