bioio_base package

Submodules

bioio_base.constants module

bioio_base.dimensions module

class bioio_base.dimensions.DimensionNames[source]

Bases: object

Channel = 'C'
MosaicTile = 'M'
Samples = 'S'
SpatialX = 'X'
SpatialY = 'Y'
SpatialZ = 'Z'
Time = 'T'
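
The constants above can be concatenated to build dimension-order strings without hard-coding letters; a minimal sketch:

>>> from bioio_base.dimensions import DimensionNames
... default_order = (
...     DimensionNames.Time
...     + DimensionNames.Channel
...     + DimensionNames.SpatialZ
...     + DimensionNames.SpatialY
...     + DimensionNames.SpatialX
... )
... default_order  # 'TCZYX'
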
class bioio_base.dimensions.Dimensions(dims: Collection[str], shape: Tuple[int, ...])[source]

Bases: object

A general object for managing the pairing of dimension name and dimension size.

Parameters:
dims: Collection[str]

An ordered string or collection of the dimensions to pair with their sizes.

shape: Tuple[int, …]

An ordered tuple of the dimension sizes to pair with their names.

Examples

>>> dims = Dimensions("TCZYX", (1, 4, 75, 624, 924))
... dims.X
... dims['T', 'X']
items() ItemsView[str, int][source]
property order: str
Returns:
order: str

The natural order of the dimensions as a single string.

property shape: Tuple[int, ...]
Returns:
shape: Tuple[int, …]

The dimension sizes in their natural order.
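
A short sketch combining the items(), order, and shape members documented above:

>>> from bioio_base.dimensions import Dimensions
... dims = Dimensions("TCZYX", (1, 4, 75, 624, 924))
... dims.order            # 'TCZYX'
... dims.shape            # (1, 4, 75, 624, 924)
... dict(dims.items())    # {'T': 1, 'C': 4, 'Z': 75, 'Y': 624, 'X': 924}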

bioio_base.exceptions module

exception bioio_base.exceptions.ConflictingArgumentsError[source]

Bases: Exception

This exception is raised when two arguments to the same function are in conflict.

exception bioio_base.exceptions.InvalidDimensionOrderingError[source]

Bases: Exception

A general exception that can be thrown when handling dimension ordering or validation. Should be provided with a message for the user to be given more context.

exception bioio_base.exceptions.UnexpectedShapeError[source]

Bases: Exception

A general exception that can be thrown when handling shape validation. Should be provided with a message for the user to be given more context.

exception bioio_base.exceptions.UnsupportedFileFormatError(reader_name: str, path: str, msg_extra: str | None = None)[source]

Bases: Exception

This exception is intended to communicate that the file extension is not one of the supported file types and cannot be parsed with AICSImage.
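
A hedged sketch of how a caller might handle these exceptions when probing a file; ExampleReader and the file name are hypothetical placeholders for a concrete plugin reader:

>>> from bioio_base.exceptions import UnsupportedFileFormatError
... try:
...     reader = ExampleReader("unknown_format.xyz")  # hypothetical plugin reader
... except UnsupportedFileFormatError as exc:
...     print(f"Cannot read file: {exc}")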

bioio_base.image_container module

class bioio_base.image_container.ImageContainer(image: str | Path | ndarray | Array | DataArray | List[ndarray | Array | DataArray] | List[str | Path], reader: Type[ImageContainer] | None = None, reconstruct_mosaic: bool = True, fs_kwargs: Dict[str, Any] = {}, **kwargs: Any)[source]

Bases: ABC

abstract property channel_names: List[str] | None
abstract property current_resolution_level: int
abstract property current_scene: str
abstract property current_scene_index: int
abstract property dask_data: Array
abstract property data: ndarray
abstract property dims: Dimensions
abstract property dtype: dtype
abstract get_image_dask_data(dimension_order_out: str | None = None, **kwargs: Any) Array[source]
abstract get_image_data(dimension_order_out: str | None = None, **kwargs: Any) ndarray[source]
abstract property metadata: Any
abstract property physical_pixel_sizes: PhysicalPixelSizes
abstract property resolution_levels: Tuple[int, ...]
abstract property scenes: Tuple[str, ...]
abstract set_resolution_level(resolution_level: int) None[source]
abstract set_scene(scene_id: str | int) None[source]
abstract property shape: Tuple[int, ...]
abstract property xarray_dask_data: DataArray
abstract property xarray_data: DataArray

bioio_base.io module

bioio_base.io.pathlike_to_fs(uri: str | Path, enforce_exists: bool = False, fs_kwargs: Dict[str, Any] = {}) Tuple[AbstractFileSystem, str][source]

Find and return the appropriate filesystem and path from a path-like object.

Parameters:
uri: PathLike

The local or remote path or uri.

enforce_exists: bool

If True, check whether the resource exists and raise FileNotFoundError if it does not.

fs_kwargs: Dict[str, Any]

Any specific keyword arguments to pass down to the fsspec-created filesystem. Default: {}

Returns:
fs: AbstractFileSystem

The filesystem to operate on.

path: str

The full path to the target resource.

Raises:
FileNotFoundError

If enforce_exists is True and the resource is not found or is unavailable.
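
A minimal usage sketch; the remote URI is a hypothetical example and assumes the matching fsspec protocol implementation (e.g. s3fs) is installed:

>>> from bioio_base.io import pathlike_to_fs
... fs, path = pathlike_to_fs("s3://example-bucket/image.ome.tiff", enforce_exists=False)
... with fs.open(path, "rb") as f:
...     header = f.read(16)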

bioio_base.noop_reader module

class bioio_base.noop_reader.NoopReader(image: Any, **kwargs: Any)[source]

Bases: Reader

No-op (no operation) reader intended for use in tests that exercise utilities which consume readers, without testing any specific reader.

NOT intended to be inherited by plug-in readers; see ImageContainer instead.

property scenes: Tuple[str, ...]
Returns:
scenes: Tuple[str, …]

A tuple of valid scene ids in the file.

Notes

Scene IDs are strings - not a range of integers.

When iterating over scenes please use:

>>> for id in image.scenes

and not:

>>> for i in range(len(image.scenes))

bioio_base.reader module

class bioio_base.reader.Reader(image: Any, **kwargs: Any)[source]

Bases: ImageContainer, ABC

A small class to build standardized image reader objects that deal with the raw image and metadata.

Parameters:
image: Any

Some type of object to read and follow the Reader specification.

fs_kwargs: Dict[str, Any]

Any specific keyword arguments to pass down to the fsspec created filesystem. Default: {}

Notes

It is up to the implementer of the Reader to decide which types they would like to accept (certain readers may not support buffers for example).

property channel_names: List[str] | None
Returns:
channel_names: List[str]

Using available metadata, the list of strings representing channel names. If no channel dimension is present in the data, returns None.

property current_resolution_level: int
Returns:
resolution_level: int

The current resolution level.

property current_scene: str
Returns:
scene: str

The current operating scene.

property current_scene_index: int
Returns:
scene_index: int

The current operating scene index in the file.

property dask_data: Array
Returns:
dask_data: da.Array

The image as a dask array with the native dimension ordering.

property data: ndarray
Returns:
data: np.ndarray

The image as a numpy array with native dimension ordering.

property dims: Dimensions
Returns:
dims: Dimensions

Object with the paired dimension names and their sizes.

property dtype: dtype
Returns:
dtype: np.dtype

Data-type of the image array’s elements.

get_dask_stack(**kwargs: Any) Array[source]

Get all scenes stacked into a single array.

Parameters:
kwargs: Any

Extra keyword arguments that will be passed down to the generate stack function.

Returns:
stack: da.Array

The fully stacked array. This can be 6+ dimensions with Scene being the first dimension.

See also

aicsimageio.transforms.generate_stack

Underlying function for generating various scene stacks.
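
A brief sketch, assuming reader is an instance of a concrete Reader plugin with more than one scene:

>>> stack = reader.get_dask_stack()
... stack.shape  # e.g. (n_scenes, T, C, Z, Y, X) for a typical 5D reader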

get_image_dask_data(dimension_order_out: str | None = None, **kwargs: Any) Array[source]

Get specific dimension image data out of an image as a dask array.

Parameters:
dimension_order_out: Optional[str]

A string containing the dimension ordering desired for the returned ndarray. Default: The natural image dimension order.

kwargs: Any
  • C=1: specifies Channel 1

  • T=3: specifies the fourth index in T

  • D=n: D is Dimension letter and n is the index desired. D should not be present in the dimension_order_out.

  • D=[a, b, c]: D is Dimension letter and a, b, c is the list of indices desired. D should be present in the dimension_order_out.

  • D=(a, b, c): D is Dimension letter and a, b, c is the tuple of indices desired. D should be present in the dimension_order_out.

  • D=range(…): D is Dimension letter and range is the standard Python range function. D should be present in the dimension_order_out.

  • D=slice(…): D is Dimension letter and slice is the standard Python slice function. D should be present in the dimension_order_out.

Returns:
data: da.Array

The image data with the specified dimension ordering.

Notes

If a requested dimension is not present in the data the dimension is added with a depth of 1.

See aicsimageio.transforms.reshape_data for more details.

Examples

Specific index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... c1 = img.get_image_dask_data("ZYX", C=1)

List of index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... first_and_second = img.get_image_dask_data("CZYX", C=[0, 1])

Tuple of index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... first_and_last = img.get_image_dask_data("CZYX", C=(0, -1))

Range of index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... first_three = img.get_image_dask_data("CZYX", C=range(3))

Slice selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... every_other = img.get_image_dask_data("CZYX", C=slice(0, -1, 2))
get_image_data(dimension_order_out: str | None = None, **kwargs: Any) ndarray[source]

Read the image as a numpy array then return specific dimension image data.

Parameters:
dimension_order_out: Optional[str]

A string containing the dimension ordering desired for the returned ndarray. Default: The natural image dimension order.

kwargs: Any
  • C=1: specifies Channel 1

  • T=3: specifies the fourth index in T

  • D=n: D is Dimension letter and n is the index desired. D should not be present in the dimension_order_out.

  • D=[a, b, c]: D is Dimension letter and a, b, c is the list of indices desired. D should be present in the dimension_order_out.

  • D=(a, b, c): D is Dimension letter and a, b, c is the tuple of indices desired. D should be present in the dimension_order_out.

  • D=range(…): D is Dimension letter and range is the standard Python range function. D should be present in the dimension_order_out.

  • D=slice(…): D is Dimension letter and slice is the standard Python slice function. D should be present in the dimension_order_out.

Returns:
data: np.ndarray

The image data with the specified dimension ordering.

Notes

  • If a requested dimension is not present in the data the dimension is added with a depth of 1.

  • This will preload the entire image before returning the requested data.

See aicsimageio.transforms.reshape_data for more details.

Examples

Specific index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... c1 = img.get_image_data("ZYX", C=1)

List of index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... first_and_second = img.get_image_data("CZYX", C=[0, 1])

Tuple of index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... first_and_last = img.get_image_data("CZYX", C=(0, -1))

Range of index selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... first_three = img.get_image_data("CZYX", C=range(3))

Slice selection

>>> img = Reader("s_1_t_1_c_10_z_20.ome.tiff")
... every_other = img.get_image_data("CZYX", C=slice(0, -1, 2))
get_mosaic_tile_position(mosaic_tile_index: int, **kwargs: int) Tuple[int, int][source]

Get the absolute position of the top left point for a single mosaic tile.

Parameters:
mosaic_tile_index: int

The index for the mosaic tile to retrieve position information for.

kwargs: int

The keywords below allow you to specify the dimensions that you wish to match. If you under-specify the constraints you can easily end up with a massive image stack.

Z = 1  # The Z-dimension.
C = 2  # The C-dimension (“channel”).
T = 3  # The T-dimension (“time”).

Returns:
top: int

The Y coordinate for the tile position.

left: int

The X coordinate for the tile position.

Raises:
UnexpectedShapeError

The image has no mosaic dimension available.

IndexError

No matching mosaic tile index found.
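
A hedged sketch, assuming reader is a concrete, mosaic-capable Reader instance:

>>> top, left = reader.get_mosaic_tile_position(0, C=0, T=0, Z=0)
... top, left  # absolute Y and X offsets of the first tile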

get_mosaic_tile_positions(**kwargs: int) List[Tuple[int, int]][source]

Get the absolute positions of the top left points for each mosaic tile matching the specified dimensions and current scene.

Parameters:
kwargs: int

The keywords below allow you to specify the dimensions that you wish to match. If you under-specify the constraints you can easily end up with a massive image stack.

Z = 1  # The Z-dimension.
C = 2  # The C-dimension (“channel”).
T = 3  # The T-dimension (“time”).

Returns:
mosaic_tile_positions: List[Tuple[int, int]]

List of the (Y, X) coordinates for the tile positions.

Raises:
UnexpectedShapeError

The image has no mosaic dimension available.

get_stack(**kwargs: Any) ndarray[source]

Get all scenes stacked into a single array.

Parameters:
kwargs: Any

Extra keyword arguments that will be passed down to the generate stack function.

Returns:
stack: np.ndarray

The fully stacked array. This can be 6+ dimensions with Scene being the first dimension.

See also

aicsimageio.transforms.generate_stack

Underlying function for generating various scene stacks.

get_xarray_dask_stack(**kwargs: Any) DataArray[source]

Get all scenes stacked into a single array.

Parameters:
kwargs: Any

Extra keyword arguments that will be passed down to the generate stack function.

Returns:
stack: xr.DataArray

The fully stacked array. This can be 6+ dimensions with Scene being the first dimension.

See also

aicsimageio.transforms.generate_stack

Underlying function for generating various scene stacks.

Notes

When requesting an xarray stack, the first scene’s coordinate planes are used for the returned xarray DataArray object coordinate planes.

get_xarray_stack(**kwargs: Any) DataArray[source]

Get all scenes stacked into a single array.

Parameters:
kwargs: Any

Extra keyword arguments that will be passed down to the generate stack function.

Returns:
stack: xr.DataArray

The fully stacked array. This can be 6+ dimensions with Scene being the first dimension.

See also

aicsimageio.transforms.generate_stack

Underlying function for generating various scene stacks.

Notes

When requesting an xarray stack, the first scene’s coordinate planes are used for the returned xarray DataArray object coordinate planes.

classmethod is_supported_image(image: str | Path | ndarray | Array | DataArray | List[ndarray | Array | DataArray] | List[str | Path], fs_kwargs: Dict[str, Any] = {}, **kwargs: Any) bool[source]

Asserts that the provided image-like object is supported by the current Reader.

Parameters:
image: types.ImageLike

The filepath or array to validate as a supported type.

fs_kwargs: Dict[str, Any]

Any specific keyword arguments to pass down to the fsspec created filesystem. Default: {}

kwargs: Any

Any kwargs used for reading and validation of the file.

Returns:
supported: bool

Boolean indicating whether the provided data is supported by the current Reader.

Raises:
TypeError

Invalid type provided to image parameter.
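
A usage sketch; ExampleReader and the file path are hypothetical stand-ins for a concrete plugin reader and a real file:

>>> if ExampleReader.is_supported_image("my_image.ome.tiff"):
...     reader = ExampleReader("my_image.ome.tiff")
... else:
...     print("Try a different bioio reader plugin")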

property metadata: Any
Returns:
metadata: Any

The metadata for the formats supported by the inheriting Reader.

If the inheriting Reader supports processing the metadata into a more useful format / Python object, this will return the result.

To access both the unprocessed and processed metadata from the file, use xarray_dask_data.attrs, which contains a dictionary with the keys unprocessed and processed that you can then select from.
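
A sketch of retrieving both metadata variants through the attrs dictionary described above, assuming reader is a concrete Reader instance:

>>> attrs = reader.xarray_dask_data.attrs
... raw_metadata = attrs["unprocessed"]
... processed_metadata = attrs.get("processed")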

property mosaic_dask_data: Array
Returns:
dask_data: da.Array

The stitched together mosaic image as a dask array.

Raises:
InvalidDimensionOrderingError

No MosaicTile dimension available to reader.

Notes

Each reader can implement mosaic tile stitching differently but it is common that each tile is a dask array chunk.

property mosaic_data: ndarray
Returns:
data: np.ndarray

The stitched together mosaic image as a numpy array.

Raises:
InvalidDimensionOrderingError

No MosaicTile dimension available to reader.

Notes

For very large images, use mosaic_dask_data to avoid segmentation faults.

property mosaic_tile_dims: Dimensions | None
Returns:
tile_dims: Optional[Dimensions]

The dimensions for each tile in the mosaic image. If the image is not a mosaic image, returns None.

property mosaic_xarray_dask_data: DataArray
Returns:
xarray_dask_data: xr.DataArray

The delayed mosaic image and metadata as an annotated data array.

Raises:
InvalidDimensionOrderingError

No MosaicTile dimension available to reader.

Notes

Each reader can implement mosaic tile stitching differently but it is common that each tile is a dask array chunk.

property mosaic_xarray_data: DataArray
Returns:
xarray_data: xr.DataArray

The in-memory mosaic image and metadata as an annotated data array.

Raises:
InvalidDimensionOrderingError

No MosaicTile dimension available to reader.

Notes

For very large images, use mosaic_xarray_dask_data to avoid segmentation faults.

property ome_metadata: OME
Returns:
metadata: OME

The original metadata transformed into the OME specification. This is likely not a complete transformation but is guaranteed to be a valid one.

Raises:
NotImplementedError

No metadata transformer available.

property physical_pixel_sizes: PhysicalPixelSizes
Returns:
sizes: PhysicalPixelSizes

Using available metadata, the floats representing physical pixel sizes for dimensions Z, Y, and X.

Notes

We currently do not handle unit attachment to these values. Please see the file metadata for unit information.
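
A brief sketch, assuming reader is a concrete Reader instance; any of the three values may be None when the metadata does not provide them:

>>> pps = reader.physical_pixel_sizes
... pps.Z, pps.Y, pps.X  # e.g. (0.5, 0.1, 0.1); units depend on the file metadata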

property resolution_level_dims: Dict[int, Tuple[int, ...]]
Returns:
resolution_level_dims: Dict[int, Tuple[int, …]]

Dictionary mapping each resolution level to its shape.

property resolution_levels: Tuple[int, ...]
Returns:
resolution_levels: Tuple[int, …]

The available resolution levels for the current scene. By default these are ordered from highest to lowest resolution.

abstract property scenes: Tuple[str, ...]
Returns:
scenes: Tuple[str, …]

A tuple of valid scene ids in the file.

Notes

Scene IDs are strings - not a range of integers.

When iterating over scenes please use:

>>> for id in image.scenes

and not:

>>> for i in range(len(image.scenes))
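
For example, combining scene iteration with set_scene (a sketch assuming image is a concrete Reader instance):

>>> for scene_id in image.scenes:
...     image.set_scene(scene_id)
...     print(scene_id, image.shape)
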
set_resolution_level(resolution_level: int) None[source]

Set the resolution level.

Parameters:
resolution_level: int

The resolution level to access the image at.

Raises:
IndexError

The provided resolution level is not found in the available resolution level list.
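
A brief sketch, assuming reader is a concrete Reader instance whose format exposes multiple resolution levels:

>>> lowest = reader.resolution_levels[-1]   # by default ordered from highest to lowest resolution
... reader.set_resolution_level(lowest)
... reader.shape                            # shape now reflects the selected level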

set_scene(scene_id: str | int) None[source]

Set the operating scene.

Parameters:
scene_id: Union[str, int]

The scene id (if string) or scene index (if integer) to set as the operating scene.

Raises:
IndexError

The provided scene id or index is not found in the available scene id list.

TypeError

The provided value wasn’t a string (scene id) or integer (scene index).

property shape: Tuple[int, ...]
Returns:
shape: Tuple[int, …]

Tuple of the image array’s dimensions.

property xarray_dask_data: DataArray
Returns:
xarray_dask_data: xr.DataArray

The delayed image and metadata as an annotated data array.

property xarray_data: DataArray
Returns:
xarray_data: xr.DataArray

The fully read image and metadata as an annotated data array.

bioio_base.reader_metadata module

class bioio_base.reader_metadata.ReaderMetadata[source]

Bases: ABC

abstract static get_reader() Reader[source]
abstract static get_supported_extensions() List[str][source]
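
A skeletal sketch of a plugin's ReaderMetadata implementation; ExampleReader, the example_plugin module, and the extension list are hypothetical:

>>> from typing import List
... from bioio_base.reader import Reader
... from bioio_base.reader_metadata import ReaderMetadata
... class ExampleReaderMetadata(ReaderMetadata):
...     @staticmethod
...     def get_supported_extensions() -> List[str]:
...         return [".tif", ".tiff"]  # hypothetical extensions
...     @staticmethod
...     def get_reader() -> Reader:
...         from example_plugin import ExampleReader  # hypothetical plugin module
...         return ExampleReader  # the plugin's Reader class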

bioio_base.test_utilities module

bioio_base.test_utilities.check_can_serialize_image_container(image_container: ImageContainer) None[source]
bioio_base.test_utilities.check_local_file_not_open(image_container: ImageContainer) None[source]
bioio_base.test_utilities.run_image_container_checks(image_container: ImageContainer, set_scene: str, expected_scenes: Tuple[str, ...], expected_current_scene: str, expected_shape: Tuple[int, ...], expected_dtype: dtype, expected_dims_order: str, expected_channel_names: List[str] | None, expected_physical_pixel_sizes: Tuple[float | None, float | None, float | None], expected_metadata_type: type | Tuple[type | Tuple[Any, ...], ...], set_resolution_level: int = 0, expected_current_resolution_level: int = 0, expected_resolution_levels: Tuple[int, ...] = (0,)) ImageContainer[source]

A general suite of tests to run against readers.

bioio_base.test_utilities.run_image_file_checks(ImageContainer: Type[ImageContainer], image: str | Path, set_scene: str, expected_scenes: Tuple[str, ...], expected_current_scene: str, expected_shape: Tuple[int, ...], expected_dtype: dtype, expected_dims_order: str, expected_channel_names: List[str] | None, expected_physical_pixel_sizes: Tuple[float | None, float | None, float | None], expected_metadata_type: type | Tuple[type | Tuple[Any, ...], ...], set_resolution_level: int = 0, expected_current_resolution_level: int = 0, expected_resolution_levels: Tuple[int, ...] = (0,)) ImageContainer[source]
bioio_base.test_utilities.run_multi_scene_image_read_checks(ImageContainer: Type[ImageContainer], image: str | Path, first_scene_id: str | int, first_scene_shape: Tuple[int, ...], first_scene_dtype: dtype, second_scene_id: str | int, second_scene_shape: Tuple[int, ...], second_scene_dtype: dtype, allow_same_scene_data: bool = True) ImageContainer[source]

A suite of tests to ensure that data is reset when switching scenes.

bioio_base.test_utilities.run_no_scene_name_image_read_checks(ImageContainer: Type[ImageContainer], image: str | Path, first_scene_id: str | int, first_scene_dtype: dtype, second_scene_id: str | int, second_scene_dtype: dtype, allow_same_scene_data: bool = True) ImageContainer[source]

A suite of tests to check that scene names are auto-filled when not present, and scene switching is reflected in current_scene_index.

bioio_base.test_utilities.run_reader_mosaic_checks(tiles_reader: Reader, stitched_reader: Reader, tiles_set_scene: str, stitched_set_scene: str) None[source]

A general suite of tests to run against readers that can stitch mosaic tiles.

This test uses in-memory numpy arrays for comparison, so test mosaics should be small enough to fit into memory.
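
A hedged sketch of wiring these helpers into a test; ExampleReader and the file path are hypothetical:

>>> from bioio_base import test_utilities
... reader = ExampleReader("tiled_image.ome.tiff")  # hypothetical plugin reader
... test_utilities.check_can_serialize_image_container(reader)
... test_utilities.check_local_file_not_open(reader)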

bioio_base.transforms module

bioio_base.transforms.generate_stack(image_container: ImageContainer, mode: Literal['data', 'dask_data', 'xarray_data', 'xarray_dask_data'], drop_non_matching_scenes: bool = False, select_scenes: list[str | int] | tuple[str | int, ...] | None = None, scene_character: str = 'I', scene_coord_values: str = 'index') ndarray | Array | DataArray[source]

Stack each scene contained in the reader into a single array. This method handles the logic of determining which stack function to use (dask or numpy) and whether or not to return a labelled array (xr.DataArray). Users should prefer to use one of get_stack, get_dask_stack, get_xarray_stack, or get_xarray_dask_stack.

Parameters:
mode: Literal[“data”, “dask_data”, “xarray_data”, “xarray_dask_data”]

String describing the style of data to return. Should be one of: “data”, “dask_data”, “xarray_data”, “xarray_dask_data”.

drop_non_matching_scenes: bool

During scene iteration, whether a scene whose shape or dtype differs from the rest of the stack should be dropped (True) or raise an error (False). Default: False (raise an error)

select_scenes: Optional[Union[List[Union[str, int]], Tuple[Union[str, int], …]]]

Which scenes to stack into a single array. Scenes can be provided as a list or tuple of scene indices or names. It is recommended to use the scene integer index instead of the scene name to avoid duplicate scene name lookup issues. Default: None (stack all scenes)

scene_character: str

Character to use as the name of the scene dimension on the output array. Default “I”

scene_coord_values: str

How to assign coordinates to the scene dimension of the final array. If scene_coord_values=”names” use the scene name from the reader object. If scene_coord_values=”index” don’t attach any coordinates and fall back to integer values. Default: “index”

Returns:
stack: types.MetaArrayLike

The fully stacked array. This can be 6+ dimensions with Scene being the first dimension.
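
Although the get_*_stack methods on Reader are preferred, generate_stack can also be called directly; a sketch assuming reader is a concrete ImageContainer instance:

>>> from bioio_base.transforms import generate_stack
... stack = generate_stack(
...     reader,
...     mode="xarray_dask_data",
...     select_scenes=[0, 1],    # stack only the first two scenes
...     scene_character="I",
... )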

bioio_base.transforms.reduce_to_slice(L: List | Tuple) int | List | slice | Tuple[source]
bioio_base.transforms.reshape_data(data: ndarray | Array, given_dims: str, return_dims: str, **kwargs: Any) ndarray | Array[source]

Reshape the data into return_dims, pad missing dimensions, and prune extra dimensions. Warns the user to use the base reader if the depth of the Dimension being removed is not 1.

Parameters:
data: types.ArrayLike

Either a dask array or numpy.ndarray of arbitrary shape but with the dimensions specified in given_dims

given_dims: str

The dimension ordering of data, “CZYX”, “VBTCXZY” etc

return_dims: str

The dimension ordering of the return data

kwargs:
  • C=1 => desired specific channel, if C in the input data has depth 3 then C=1 returns the 2nd slice (0 indexed)

  • Z=10 => desired specific Z slice, if Z in the input data has depth 20 then Z=10 returns the 11th slice (0 indexed)

  • T=[0, 1] => desired specific timepoints, if T in the input data has depth 100 then T=[0, 1] returns the 1st and 2nd slice (0 indexed)

  • T=(0, 1) => desired specific timepoints, if T in the input data has depth 100 then T=(0, 1) returns the 1st and 2nd slice (0 indexed)

  • T=(0, -1) => desired specific timepoints, if T in the input data has depth 100 then T=(0, -1) returns the first and last slice

  • T=range(10) => desired specific timepoints, if T in the input data has depth 100 then T=range(10) returns the first ten slices

  • T=slice(0, -1, 5) => desired specific timepoints, T=slice(0, -1, 5) returns every fifth timepoint

Returns:
data: types.ArrayLike

The data with the specified dimension ordering.

Raises:
ConflictingArgumentsError

Missing dimension in return dims when using range, slice, or multi-index dimension selection for the requested dimension.

IndexError

Requested dimension index not present in data.

Examples

Specific index selection

>>> data = np.random.rand(10, 100, 100)
... z1 = reshape_data(data, "ZYX", "YX", Z=1)

List of index selection

>>> data = np.random.rand(10, 100, 100)
... first_and_second = reshape_data(data, "ZYX", "YX", Z=[0, 1])

Tuple of index selection

>>> data = np.random.rand(10, 100, 100)
... first_and_last = reshape_data(data, "ZYX", "YX", Z=(0, -1))

Range of index selection

>>> data = np.random.rand(10, 100, 100)
... first_three = reshape_data(data, "ZYX", "YX", Z=range(3))

Slice selection

>>> data = np.random.rand(10, 100, 100)
... every_other = reshape_data(data, "ZYX", "YX", Z=slice(0, -1, 2))

Empty dimension expansion

>>> data = np.random.rand(10, 100, 100)
... with_time = reshape_data(data, "ZYX", "TZYX")

Dimension order shuffle

>>> data = np.random.rand(10, 100, 100)
... as_zx_base = reshape_data(data, "ZYX", "YZX")

Selections, empty dimension expansions, and dimension order shuffle

>>> data = np.random.rand(10, 100, 100)
... example = reshape_data(data, "CYX", "BSTCZYX", C=slice(0, -1, 3))
bioio_base.transforms.transpose_to_dims(data: ndarray | Array, given_dims: str, return_dims: str) ndarray | Array[source]

This shuffles the data dimensions from given_dims to return_dims. Every dimension present in given_dims must also be used in return_dims.

Parameters:
data: types.ArrayLike

Either a dask array or numpy.ndarray of arbitrary shape but with the dimensions specified in given_dims

given_dims: str

The dimension ordering of data, “CZYX”, “VBTCXZY” etc

return_dims: str

The dimension ordering of the return data

Returns:
data: types.ArrayLike

The data with the specified dimension ordering.

Raises:
ConflictingArgumentsError

given_dims and return_dims are incompatible.
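
A minimal sketch:

>>> import numpy as np
... from bioio_base.transforms import transpose_to_dims
... data = np.random.rand(4, 100, 200)  # Z, Y, X
... yxz = transpose_to_dims(data, given_dims="ZYX", return_dims="YXZ")
... yxz.shape                           # (100, 200, 4)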

bioio_base.types module

class bioio_base.types.PhysicalPixelSizes(Z, Y, X)[source]

Bases: NamedTuple

Create new instance of PhysicalPixelSizes(Z, Y, X)

X: float | None

Alias for field number 2

Y: float | None

Alias for field number 1

Z: float | None

Alias for field number 0
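
A short construction sketch; the values are illustrative:

>>> from bioio_base.types import PhysicalPixelSizes
... pps = PhysicalPixelSizes(Z=0.5, Y=0.1, X=0.1)
... pps.Y          # 0.1
... z, y, x = pps  # unpacks like any NamedTuple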

Module contents

Top-level package for bioio_base.