tifffile-2018.11.28/LICENSE

Copyright (c) 2008-2018, Christoph Gohlke
Copyright (c) 2008-2018, The Regents of the University of California
Produced at the Laboratory for Fluorescence Dynamics
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
tifffile-2018.11.28/MANIFEST.in

include LICENSE
include README.rst
include tiffile.py
include setup_tiffile.py
include tests/*.py
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
recursive-exclude * *-
recursive-exclude test/data *
recursive-exclude test/_tmp *
tifffile-2018.11.28/PKG-INFO

Metadata-Version: 2.1
Name: tifffile
Version: 2018.11.28
Summary: Read and write TIFF(r) files
Home-page: https://www.lfd.uci.edu/~gohlke/
Author: Christoph Gohlke
Author-email: cgohlke@uci.edu
License: BSD
Description: Read and write TIFF(r) files
============================
Tifffile is a Python library to
(1) store numpy arrays in TIFF (Tagged Image File Format) files, and
(2) read image and metadata from TIFF-like files used in bioimaging.
Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, STK, LSM, NIH,
SGI, ImageJ, MicroManager, FluoView, ScanImage, SEQ, GEL, SVS, SCN, SIS, ZIF,
QPI, and GeoTIFF files.
Numpy arrays can be written to TIFF, BigTIFF, and ImageJ hyperstack compatible
files in multi-page, memory-mappable, tiled, predicted, or compressed form.
Only a subset of the TIFF specification is supported, mainly uncompressed and
losslessly compressed 1, 8, 16, 32 and 64-bit integer, 16, 32 and 64-bit float,
grayscale and RGB(A) images.
Specifically, reading slices of image data, CCITT and OJPEG compression,
chroma subsampling without JPEG compression, or IPTC and XMP metadata are not
implemented.
TIFF(r), the Tagged Image File Format, is a trademark and under control of
Adobe Systems Incorporated. BigTIFF allows for files greater than 4 GB.
STK, LSM, FluoView, SGI, SEQ, GEL, and OME-TIFF are custom extensions
defined by Molecular Devices (Universal Imaging Corporation), Carl Zeiss
MicroImaging, Olympus, Silicon Graphics International, Media Cybernetics,
Molecular Dynamics, and the Open Microscopy Environment consortium
respectively.
For command line usage run ``python -m tifffile --help``
:Author:
`Christoph Gohlke <https://www.lfd.uci.edu/~gohlke/>`_
:Organization:
Laboratory for Fluorescence Dynamics, University of California, Irvine
:Version: 2018.11.28
Requirements
------------
* CPython 2.7 or 3.5+ 64-bit
* Numpy 1.14
* Imagecodecs 2018.11.8 (optional; used for decoding LZW, JPEG, etc.)
* Matplotlib 2.2 (optional; used for plotting)
* Python 2.7 requires the 'futures', 'enum34', and 'pathlib' packages.
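The package and its optional dependencies can be installed from PyPI. A minimal sketch, assuming pip and network access; the ``all`` extra is declared in the package metadata (``Provides-Extra: all``), and what it pulls in is an assumption here:

```shell
# Install tifffile; the "[all]" extra is assumed to pull in optional
# dependencies such as imagecodecs and matplotlib (see Provides-Extra
# in PKG-INFO). Quoted so the brackets survive shell globbing.
python -m pip install "tifffile[all]"
```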
Revisions
---------
2018.11.28
Pass 2739 tests.
Make SubIFDs accessible as TiffPage.pages.
Make parsing of TiffSequence axes pattern optional (backward incompatible).
Limit parsing of TiffSequence axes pattern to file names, not path names.
Do not interpolate in imshow if image dimensions <= 512, else use bilinear.
Use logging.warning instead of warnings.warn in many cases.
Fix numpy FutureWarning for out == 'memmap'.
Adjust ZSTD and WebP compression to libtiff-4.0.10 (WIP).
Decode old style LZW with imagecodecs >= 2018.11.8.
Remove TiffFile.qptiff_metadata (QPI metadata are per page).
Do not use keyword arguments before variable positional arguments.
Make either all or none return statements in a function return expression.
Use pytest parametrize to generate tests.
Replace test classes with functions.
2018.11.6
Rename imsave function to imwrite.
Re-add Python implementations of packints, delta, and bitorder codecs.
Fix TiffFrame.compression AttributeError (bug fix).
2018.10.18
Rename tiffile package to tifffile.
2018.10.10
Pass 2710 tests.
Read ZIF, the Zoomable Image Format (WIP).
Decode YCbCr JPEG as RGB (tentative).
Improve restoration of incomplete tiles.
Allow writing grayscale with extrasamples without specifying planarconfig.
Enable decoding of PNG and JXR via imagecodecs.
Deprecate 32-bit platforms (too many memory errors during tests).
2018.9.27
Read Olympus SIS (WIP).
Allow writing non-BigTIFF files up to ~4 GB (bug fix).
Fix parsing date and time fields in SEM metadata (bug fix).
Detect some circular IFD references.
Enable WebP codecs via imagecodecs.
Add option to read TiffSequence from ZIP containers.
Remove TiffFile.isnative.
Move TIFF struct format constants out of TiffFile namespace.
2018.8.31
Pass 2699 tests.
Fix wrong TiffTag.valueoffset (bug fix).
Towards reading Hamamatsu NDPI (WIP).
Enable PackBits compression of byte and bool arrays.
Fix parsing NULL terminated CZ_SEM strings.
2018.8.24
Move tifffile.py and related modules into tiffile package.
Move usage examples to module docstring.
Enable multi-threading for compressed tiles and pages by default.
Add option to concurrently decode image tiles using threads.
Do not skip empty tiles (bug fix).
Read JPEG and J2K compressed strips and tiles.
Allow floating point predictor on write.
Add option to specify subfiletype on write.
Depend on imagecodecs package instead of _tifffile, lzma, etc modules.
Remove reverse_bitorder, unpack_ints, and decode functions.
Use pytest instead of unittest.
2018.6.20
Save RGBA with unassociated extrasample by default (backward incompatible).
Add option to specify ExtraSamples values.
2018.6.17
Pass 2680 tests.
Towards reading JPEG and other compressions via imagecodecs package (WIP).
Read SampleFormat VOID as UINT.
Add function to validate TIFF using 'jhove -m TIFF-hul'.
Save bool arrays as bilevel TIFF.
Accept pathlib.Path as filenames.
Move 'software' argument from TiffWriter __init__ to save.
Raise DOS limit to 16 TB.
Lazy load lzma and zstd compressors and decompressors.
Add option to save IJMetadata tags.
Return correct number of pages for truncated series (bug fix).
Move EXIF tags to TIFF.TAG as per TIFF/EP standard.
2018.2.18
Pass 2293 tests.
Always save RowsPerStrip and Resolution tags as required by TIFF standard.
Do not use badly typed ImageDescription.
Coerce bad ASCII string tags to bytes.
Tuning of __str__ functions.
Fix reading 'undefined' tag values (bug fix).
Read and write ZSTD compressed data.
Use hexdump to print byte strings.
Determine TIFF byte order from data dtype in imsave.
Add option to specify RowsPerStrip for compressed strips.
Allow memory-map of arrays with non-native byte order.
Attempt to handle ScanImage <= 5.1 files.
Restore TiffPageSeries.pages sequence interface.
Use numpy.frombuffer instead of fromstring to read from binary data.
Parse GeoTIFF metadata.
Add option to apply horizontal differencing before compression.
Towards reading PerkinElmer QPI (QPTIFF, no test files).
Do not index out of bounds data in tifffile.c unpackbits and decodelzw.
2017.9.29 (tentative)
Many backward incompatible changes improving speed and resource usage:
Pass 2268 tests.
Add detail argument to __str__ function. Remove info functions.
Fix potential issue correcting offsets of large LSM files with positions.
Remove TiffFile sequence interface; use TiffFile.pages instead.
Do not make tag values available as TiffPage attributes.
Use str (not bytes) type for tag and metadata strings (WIP).
Use documented standard tag and value names (WIP).
Use enums for some documented TIFF tag values.
Remove 'memmap' and 'tmpfile' options; use out='memmap' instead.
Add option to specify output in asarray functions.
Add option to concurrently decode pages using threads.
Add TiffPage.asrgb function (WIP).
Do not apply colormap in asarray.
Remove 'colormapped', 'rgbonly', and 'scale_mdgel' options from asarray.
Consolidate metadata in TiffFile _metadata functions.
Remove non-tag metadata properties from TiffPage.
Add function to convert LSM to tiled BIN files.
Align image data in file.
Make TiffPage.dtype a numpy.dtype.
Add 'ndim' and 'size' properties to TiffPage and TiffPageSeries.
Allow imsave to write non-BigTIFF files up to ~4 GB.
Only read one page for shaped series if possible.
Add memmap function to create memory-mapped array stored in TIFF file.
Add option to save empty arrays to TIFF files.
Add option to save truncated TIFF files.
Allow single tile images to be saved contiguously.
Add optional movie mode for files with uniform pages.
Lazy load pages.
Use lightweight TiffFrame for IFDs sharing properties with key TiffPage.
Move module constants to 'TIFF' namespace (speed up module import).
Remove 'fastij' option from TiffFile.
Remove 'pages' parameter from TiffFile.
Remove TIFFfile alias.
Deprecate Python 2.
Require enum34 and futures packages on Python 2.7.
Remove Record class and return all metadata as dict instead.
Add functions to parse STK, MetaSeries, ScanImage, SVS, Pilatus metadata.
Read tags from EXIF and GPS IFDs.
Use pformat for tag and metadata values.
Fix reading some UIC tags (bug fix).
Do not modify input array in imshow (bug fix).
Fix Python implementation of unpack_ints.
2017.5.23
Pass 1961 tests.
Write correct number of SampleFormat values (bug fix).
Use Adobe deflate code to write ZIP compressed files.
Add option to pass tag values as packed binary data for writing.
Defer tag validation to attribute access.
Use property instead of lazyattr decorator for simple expressions.
2017.3.17
Write IFDs and tag values on word boundaries.
Read ScanImage metadata.
Remove is_rgb and is_indexed attributes from TiffFile.
Create files used by doctests.
2017.1.12
Read Zeiss SEM metadata.
Read OME-TIFF with invalid references to external files.
Rewrite C LZW decoder (5x faster).
Read corrupted LSM files missing EOI code in LZW stream.
2017.1.1
Add option to append images to existing TIFF files.
Read files without pages.
Read S-FEG and Helios NanoLab tags created by FEI software.
Allow saving Color Filter Array (CFA) images.
Add info functions returning more information about TiffFile and TiffPage.
Add option to read specific pages only.
Remove maxpages argument (backward incompatible).
Remove test_tifffile function.
2016.10.28
Pass 1944 tests.
Improve detection of ImageJ hyperstacks.
Read TVIPS metadata created by EM-MENU (by Marco Oster).
Add option to disable using OME-XML metadata.
Allow non-integer range attributes in modulo tags (by Stuart Berg).
2016.6.21
Do not always memmap contiguous data in page series.
2016.5.13
Add option to specify resolution unit.
Write grayscale images with extra samples when planarconfig is specified.
Do not write RGB color images with 2 samples.
Reorder TiffWriter.save keyword arguments (backward incompatible).
2016.4.18
Pass 1932 tests.
TiffWriter, imread, and imsave accept open binary file streams.
2016.04.13
Correctly handle reversed fill order in 2 and 4 bps images (bug fix).
Implement reverse_bitorder in C.
2016.03.18
Fix saving additional ImageJ metadata.
2016.2.22
Pass 1920 tests.
Write 8 bytes double tag values using offset if necessary (bug fix).
Add option to disable writing second image description tag.
Detect tags with incorrect counts.
Disable color mapping for LSM.
2015.11.13
Read LSM 6 mosaics.
Add option to specify directory of memory-mapped files.
Add command line options to specify vmin and vmax values for colormapping.
2015.10.06
New helper function to apply colormaps.
Renamed is_palette attributes to is_indexed (backward incompatible).
Color-mapped samples are now contiguous (backward incompatible).
Do not color-map ImageJ hyperstacks (backward incompatible).
Towards reading Leica SCN.
2015.9.25
Read images with reversed bit order (FillOrder is LSB2MSB).
2015.9.21
Read RGB OME-TIFF.
Warn about malformed OME-XML.
2015.9.16
Detect some corrupted ImageJ metadata.
Better axes labels for 'shaped' files.
Do not create TiffTag for default values.
Chroma subsampling is not supported.
Memory-map data in TiffPageSeries if possible (optional).
2015.8.17
Pass 1906 tests.
Write ImageJ hyperstacks (optional).
Read and write LZMA compressed data.
Specify datetime when saving (optional).
Save tiled and color-mapped images (optional).
Ignore void bytecounts and offsets if possible.
Ignore bogus image_depth tag created by ISS Vista software.
Decode floating point horizontal differencing (not tiled).
Save image data contiguously if possible.
Only read first IFD from ImageJ files if possible.
Read ImageJ 'raw' format (files larger than 4 GB).
TiffPageSeries class for pages with compatible shape and data type.
Try to read incomplete tiles.
Open file dialog if no filename is passed on command line.
Ignore errors when decoding OME-XML.
Rename decoder functions (backward incompatible).
2014.8.24
TiffWriter class for incrementally writing images.
Simplify examples.
2014.8.19
Add memmap function to FileHandle.
Add function to determine if image data in TiffPage is memory-mappable.
Do not close files if multifile_close parameter is False.
2014.8.10
Pass 1730 tests.
Return all extrasamples by default (backward incompatible).
Read data from series of pages into memory-mapped array (optional).
Squeeze OME dimensions (backward incompatible).
Workaround missing EOI code in strips.
Support image and tile depth tags (SGI extension).
Better handling of STK/UIC tags (backward incompatible).
Disable color mapping for STK.
Julian to datetime converter.
TIFF ASCII type may be NULL separated.
Unwrap strip offsets for LSM files greater than 4 GB.
Correct strip byte counts in compressed LSM files.
Skip missing files in OME series.
Read embedded TIFF files.
2014.2.05
Save rational numbers as type 5 (bug fix).
2013.12.20
Keep other files in OME multi-file series closed.
FileHandle class to abstract binary file handle.
Disable color mapping for bad OME-TIFF produced by bio-formats.
Read bad OME-XML produced by ImageJ when cropping.
2013.11.3
Allow zlib compression of data in the imsave function (optional).
Memory-map contiguous image data (optional).
2013.10.28
Read MicroManager metadata and little-endian ImageJ tag.
Save extra tags in imsave function.
Save tags in ascending order by code (bug fix).
2012.10.18
Accept file like objects (read from OIB files).
2012.8.21
Rename TIFFfile to TiffFile and TIFFpage to TiffPage.
TiffSequence class for reading sequence of TIFF files.
Read UltraQuant tags.
Allow float numbers as resolution in imsave function.
2012.8.3
Read MD GEL tags and NIH Image header.
2012.7.25
Read ImageJ tags.
...
Notes
-----
The API is not stable yet and might change between revisions.
Tested on little-endian platforms only.
Python 2.7, 3.4, and 32-bit versions are deprecated.
Other libraries for reading scientific TIFF files from Python:
* Python-bioformats
* Imread
* GDAL
* OpenSlide-python
* PyLibTiff
* SimpleITK
* PyLSM
* PyMca.TiffIO.py (same as fabio.TiffIO)
* BioImageXD.Readers
* Cellcognition.io
* pymimage
* pytiff
Acknowledgements
----------------
* Egor Zindy, University of Manchester, for lsm_scan_info specifics.
* Wim Lewis for a bug fix and some LSM functions.
* Hadrien Mary for help on reading MicroManager files.
* Christian Kliche for help writing tiled and color-mapped files.
References
----------
1) TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.
https://www.adobe.io/open/standards/TIFF.html
2) TIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html
3) MetaMorph Stack (STK) Image File Format.
http://mdc.custhelp.com/app/answers/detail/a_id/18862
4) Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010).
Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011
5) The OME-TIFF format.
https://docs.openmicroscopy.org/ome-model/5.6.4/ome-tiff/
6) UltraQuant(r) Version 6.0 for Windows Start-Up Guide.
http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf
7) Micro-Manager File Formats.
https://micro-manager.org/wiki/Micro-Manager_File_Formats
8) Tags for TIFF and Related Specifications. Digital Preservation.
https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
9) ScanImage BigTiff Specification - ScanImage 2016.
http://scanimage.vidriotechnologies.com/display/SI2016/ScanImage+BigTiff+Specification
10) CIPA DC-008-2016: Exchangeable image file format for digital still cameras:
Exif Version 2.31.
http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf
11) ZIF, the Zoomable Image File format. http://zif.photo/
Examples
--------
Save a 3D numpy array to a multi-page, 16-bit grayscale TIFF file:
>>> data = numpy.random.randint(0, 2**16, (4, 301, 219), 'uint16')
>>> imwrite('temp.tif', data, photometric='minisblack')
Read the whole image stack from the TIFF file as numpy array:
>>> image_stack = imread('temp.tif')
>>> image_stack.shape
(4, 301, 219)
>>> image_stack.dtype
dtype('uint16')
Read the image from first page (IFD) in the TIFF file:
>>> image = imread('temp.tif', key=0)
>>> image.shape
(301, 219)
Read images from a sequence of TIFF files as numpy array:
>>> image_sequence = imread(['temp.tif', 'temp.tif'])
>>> image_sequence.shape
(2, 4, 301, 219)
Save a numpy array to a single-page RGB TIFF file:
>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')
Save a floating-point array and metadata, using zlib compression:
>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')
>>> imwrite('temp.tif', data, compress=6, metadata={'axes': 'TZCYX'})
Save a volume with xyz voxel size 2.6755x2.6755x3.9474 µm^3 to ImageJ file:
>>> volume = numpy.random.randn(57*256*256).astype('float32')
>>> volume.shape = 1, 57, 1, 256, 256, 1 # dimensions in TZCYXS order
>>> imwrite('temp.tif', volume, imagej=True, resolution=(1./2.6755, 1./2.6755),
... metadata={'spacing': 3.947368, 'unit': 'um'})
Read hyperstack and metadata from ImageJ file:
>>> with TiffFile('temp.tif') as tif:
... imagej_hyperstack = tif.asarray()
... imagej_metadata = tif.imagej_metadata
>>> imagej_hyperstack.shape
(57, 256, 256)
>>> imagej_metadata['slices']
57
Create an empty TIFF file and write to the memory-mapped numpy array:
>>> memmap_image = memmap('temp.tif', shape=(256, 256), dtype='float32')
>>> memmap_image[255, 255] = 1.0
>>> memmap_image.flush()
>>> memmap_image.shape, memmap_image.dtype
((256, 256), dtype('float32'))
>>> del memmap_image
Memory-map image data in the TIFF file:
>>> memmap_image = memmap('temp.tif', page=0)
>>> memmap_image[255, 255]
1.0
>>> del memmap_image
Successively append images to a BigTIFF file:
>>> data = numpy.random.randint(0, 255, (5, 2, 3, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif', bigtiff=True) as tif:
... for i in range(data.shape[0]):
... tif.save(data[i], compress=6, photometric='minisblack')
Iterate over pages and tags in the TIFF file and successively read images:
>>> with TiffFile('temp.tif') as tif:
... image_stack = tif.asarray()
... for page in tif.pages:
... for tag in page.tags.values():
... tag_name, tag_value = tag.name, tag.value
... image = page.asarray()
Save two image series to a TIFF file:
>>> data0 = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> data1 = numpy.random.randint(0, 255, (5, 301, 219), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
... tif.save(data0, compress=6, photometric='rgb')
... tif.save(data1, compress=6, photometric='minisblack')
Read the second image series from the TIFF file:
>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(5, 301, 219)
Read an image stack from a sequence of TIFF files with a file name pattern:
>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = TiffSequence('temp_C001*.tif', pattern='axes')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
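The doctest examples above assume the relevant names have already been imported from the package. A minimal, self-contained sketch of such a session (the file name 'temp.tif' is arbitrary):

```python
import numpy
from tifffile import imread, imwrite

# write a 3D uint16 array as a multi-page grayscale TIFF and read it back
data = numpy.random.randint(0, 2**16, (4, 301, 219), 'uint16')
imwrite('temp.tif', data, photometric='minisblack')

image_stack = imread('temp.tif')
assert image_stack.shape == (4, 301, 219)
assert image_stack.dtype == numpy.uint16
```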
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Requires-Python: >=2.7
Provides-Extra: all
tifffile-2018.11.28/README.rst

Read and write TIFF(r) files
============================

(README.rst repeats the Description field of PKG-INFO above verbatim.)
Squeeze OME dimensions (backward incompatible).
Workaround missing EOI code in strips.
Support image and tile depth tags (SGI extension).
Better handling of STK/UIC tags (backward incompatible).
Disable color mapping for STK.
Julian to datetime converter.
TIFF ASCII type may be NULL separated.
Unwrap strip offsets for LSM files greater than 4 GB.
Correct strip byte counts in compressed LSM files.
Skip missing files in OME series.
Read embedded TIFF files.
2014.2.05
Save rational numbers as type 5 (bug fix).
2013.12.20
Keep other files in OME multi-file series closed.
FileHandle class to abstract binary file handle.
Disable color mapping for bad OME-TIFF produced by bio-formats.
Read bad OME-XML produced by ImageJ when cropping.
2013.11.3
Allow zlib compress data in imsave function (optional).
Memory-map contiguous image data (optional).
2013.10.28
Read MicroManager metadata and little-endian ImageJ tag.
Save extra tags in imsave function.
Save tags in ascending order by code (bug fix).
2012.10.18
Accept file like objects (read from OIB files).
2012.8.21
Rename TIFFfile to TiffFile and TIFFpage to TiffPage.
TiffSequence class for reading sequence of TIFF files.
Read UltraQuant tags.
Allow float numbers as resolution in imsave function.
2012.8.3
Read MD GEL tags and NIH Image header.
2012.7.25
Read ImageJ tags.
...
Notes
-----
The API is not stable yet and might change between revisions.
Tested on little-endian platforms only.
Python 2.7, 3.4, and 32-bit versions are deprecated.
Other libraries for reading scientific TIFF files from Python:
* `Python-bioformats `_
* `Imread `_
* `GDAL `_
* `OpenSlide-python `_
* `PyLibTiff `_
* `SimpleITK `_
* `PyLSM `_
* `PyMca.TiffIO.py `_ (same as fabio.TiffIO)
* `BioImageXD.Readers `_
* `Cellcognition.io `_
* `pymimage `_
* `pytiff `_
Acknowledgements
----------------
* Egor Zindy, University of Manchester, for lsm_scan_info specifics.
* Wim Lewis for a bug fix and some LSM functions.
* Hadrien Mary for help on reading MicroManager files.
* Christian Kliche for help writing tiled and color-mapped files.
References
----------
1) TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.
https://www.adobe.io/open/standards/TIFF.html
2) TIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html
3) MetaMorph Stack (STK) Image File Format.
http://mdc.custhelp.com/app/answers/detail/a_id/18862
4) Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010).
Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011
5) The OME-TIFF format.
https://docs.openmicroscopy.org/ome-model/5.6.4/ome-tiff/
6) UltraQuant(r) Version 6.0 for Windows Start-Up Guide.
http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf
7) Micro-Manager File Formats.
https://micro-manager.org/wiki/Micro-Manager_File_Formats
8) Tags for TIFF and Related Specifications. Digital Preservation.
https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
9) ScanImage BigTiff Specification - ScanImage 2016.
http://scanimage.vidriotechnologies.com/display/SI2016/
ScanImage+BigTiff+Specification
10) CIPA DC-008-2016: Exchangeable image file format for digital still cameras:
Exif Version 2.31.
http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf
11) ZIF, the Zoomable Image File format. http://zif.photo/
Examples
--------
Save a 3D numpy array to a multi-page, 16-bit grayscale TIFF file:
>>> data = numpy.random.randint(0, 2**16, (4, 301, 219), 'uint16')
>>> imwrite('temp.tif', data, photometric='minisblack')
Read the whole image stack from the TIFF file as numpy array:
>>> image_stack = imread('temp.tif')
>>> image_stack.shape
(4, 301, 219)
>>> image_stack.dtype
dtype('uint16')
Read the image from first page (IFD) in the TIFF file:
>>> image = imread('temp.tif', key=0)
>>> image.shape
(301, 219)
Read images from a sequence of TIFF files as numpy array:
>>> image_sequence = imread(['temp.tif', 'temp.tif'])
>>> image_sequence.shape
(2, 4, 301, 219)
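Reading a list of files this way stacks the per-file arrays along a new first axis; a minimal numpy-only sketch of the equivalent stacking (zero arrays standing in for the file contents):

```python
import numpy

# Two stand-in arrays with the per-file shape from the example above.
a = numpy.zeros((4, 301, 219), 'uint16')
b = numpy.zeros((4, 301, 219), 'uint16')

# imread(['temp.tif', 'temp.tif']) stacks one array per input file.
stacked = numpy.stack([a, b])
print(stacked.shape)  # (2, 4, 301, 219)
```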
Save a numpy array to a single-page RGB TIFF file:
>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')
Save a floating-point array and metadata, using zlib compression:
>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')
>>> imwrite('temp.tif', data, compress=6, metadata={'axes': 'TZCYX'})
Save a volume with xyz voxel size 2.6755x2.6755x3.9474 µm^3 to ImageJ file:
>>> volume = numpy.random.randn(57*256*256).astype('float32')
>>> volume.shape = 1, 57, 1, 256, 256, 1 # dimensions in TZCYXS order
>>> imwrite('temp.tif', volume, imagej=True, resolution=(1./2.6755, 1./2.6755),
... metadata={'spacing': 3.947368, 'unit': 'um'})
Read hyperstack and metadata from ImageJ file:
>>> with TiffFile('temp.tif') as tif:
... imagej_hyperstack = tif.asarray()
... imagej_metadata = tif.imagej_metadata
>>> imagej_hyperstack.shape
(57, 256, 256)
>>> imagej_metadata['slices']
57
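The (57, 256, 256) hyperstack shape follows from squeezing the singleton T, C, and S axes of the TZCYXS volume written above; in plain numpy terms:

```python
import numpy

# Same dimensions as the ImageJ volume example: TZCYXS order.
volume = numpy.zeros((1, 57, 1, 256, 256, 1), 'float32')

# Dropping the length-1 axes leaves the ZYX hyperstack shape.
print(volume.squeeze().shape)  # (57, 256, 256)
```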
Create an empty TIFF file and write to the memory-mapped numpy array:
>>> memmap_image = memmap('temp.tif', shape=(256, 256), dtype='float32')
>>> memmap_image[255, 255] = 1.0
>>> memmap_image.flush()
>>> memmap_image.shape, memmap_image.dtype
((256, 256), dtype('float32'))
>>> del memmap_image
Memory-map image data in the TIFF file:
>>> memmap_image = memmap('temp.tif', page=0)
>>> memmap_image[255, 255]
1.0
>>> del memmap_image
Successively append images to a BigTIFF file:
>>> data = numpy.random.randint(0, 255, (5, 2, 3, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif', bigtiff=True) as tif:
... for i in range(data.shape[0]):
... tif.save(data[i], compress=6, photometric='minisblack')
Iterate over pages and tags in the TIFF file and successively read images:
>>> with TiffFile('temp.tif') as tif:
... image_stack = tif.asarray()
... for page in tif.pages:
... for tag in page.tags.values():
... tag_name, tag_value = tag.name, tag.value
... image = page.asarray()
Save two image series to a TIFF file:
>>> data0 = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> data1 = numpy.random.randint(0, 255, (5, 301, 219), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
... tif.save(data0, compress=6, photometric='rgb')
... tif.save(data1, compress=6, photometric='minisblack')
Read the second image series from the TIFF file:
>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(5, 301, 219)
Read an image stack from a sequence of TIFF files with a file name pattern:
>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = TiffSequence('temp_C001*.tif', pattern='axes')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
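As a sanity check on the ImageJ volume example, the resolution values passed to imwrite are simply reciprocals of the in-plane voxel size in micrometers (plain Python, no tifffile required):

```python
xy_size_um = 2.6755  # in-plane voxel size from the ImageJ example

# TIFF X/YResolution store pixels per unit, i.e. the reciprocal of the
# physical pixel size, which is what resolution=(1./2.6755, 1./2.6755)
# encodes.
xres = 1.0 / xy_size_um
print(round(xres, 4))  # 0.3738
```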
tifffile-2018.11.28/setup.cfg
[egg_info]
tag_build =
tag_date = 0
tifffile-2018.11.28/setup.py
# -*- coding: utf-8 -*-
# tifffile/setup.py
"""Tifffile package setuptools script."""
import sys
import re
from setuptools import setup
buildnumber = ''
imagecodecs = 'imagecodecs>=2018.11.8'
with open('tifffile/tifffile.py') as fh:
code = fh.read()
version = re.search(r"__version__ = '(.*?)'", code).groups()[0]
version += ('.' + buildnumber) if buildnumber else ''
description = re.search(r'"""(.*)\.[\r\n?|\n]', code).groups()[0]
readme = re.search(r'[\r\n?|\n]{2}"""(.*)"""[\r\n?|\n]{2}from', code,
re.MULTILINE | re.DOTALL).groups()[0]
license = re.search(r'(# Copyright.*?[\r\n?|\n])[\r\n?|\n]+""', code,
re.MULTILINE | re.DOTALL).groups()[0]
readme = '\n'.join([description, '=' * len(description)]
+ readme.splitlines()[1:])
license = license.replace('# ', '').replace('#', '')
if 'sdist' in sys.argv:
with open('LICENSE', 'w') as fh:
fh.write(license)
with open('README.rst', 'w') as fh:
fh.write(readme)
setup(
name='tifffile',
version=version,
description=description,
long_description=readme,
author='Christoph Gohlke',
author_email='cgohlke@uci.edu',
url='https://www.lfd.uci.edu/~gohlke/',
license='BSD',
packages=['tifffile'],
python_requires='>=2.7',
install_requires=[
'numpy>=1.11.3',
'pathlib;python_version=="2.7"',
'enum34;python_version=="2.7"',
'futures;python_version=="2.7"',
# require imagecodecs on Windows only
imagecodecs + ';platform_system=="Windows"',
],
extras_require={
'all': ['matplotlib>=2.2', imagecodecs],
},
tests_require=['pytest', imagecodecs],
entry_points={
'console_scripts': [
'tifffile = tifffile:main',
'lsm2bin = tifffile.lsm2bin:main'
]},
platforms=['any'],
classifiers=[
'Development Status :: 4 - Beta',
'License :: OSI Approved :: BSD License',
'Intended Audience :: Science/Research',
'Intended Audience :: Developers',
'Operating System :: OS Independent',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
)
tifffile-2018.11.28/setup_tiffile.py
# -*- coding: utf-8 -*-
# setup_tiffile.py
"""Tiffile module setuptools script."""
import re
from setuptools import setup
with open('tifffile/tifffile.py') as fh:
code = fh.read()
version = re.search(r"__version__ = '(.*?)'", code).groups()[0]
setup(
name='tiffile',
version=version,
description='The tiffile package is deprecated. '
'Please use the tifffile package instead.',
author='Christoph Gohlke',
author_email='cgohlke@uci.edu',
url='https://www.lfd.uci.edu/~gohlke/',
license='BSD',
py_modules=['tiffile'],
install_requires=['tifffile'],
platforms=['any'],
classifiers=[
'Development Status :: 4 - Beta',
'License :: OSI Approved :: BSD License',
'Intended Audience :: Science/Research',
'Intended Audience :: Developers',
'Operating System :: OS Independent',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
)
tifffile-2018.11.28/tests/conftest.py
# -*- coding: utf-8 -*-
# tifffile/tests/conftest.py
collect_ignore = ['_tmp', 'data']
def pytest_report_header(config):
try:
import numpy
import tifffile
import imagecodecs
return 'versions: tifffile-%s, imagecodecs-%s, numpy-%s' % (
tifffile.__version__, imagecodecs.__version__, numpy.__version__)
except Exception:
pass
tifffile-2018.11.28/tests/test_tifffile.py
# -*- coding: utf-8 -*-
# test_tifffile.py
# Copyright (c) 2008-2018, Christoph Gohlke
# Copyright (c) 2008-2018, The Regents of the University of California
# Produced at the Laboratory for Fluorescence Dynamics
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Unittests for the tifffile package.
Data files are not public due to size and copyright restrictions.
:Author:
`Christoph Gohlke `_
:Organization:
Laboratory for Fluorescence Dynamics. University of California, Irvine
:Version: 2018.11.28
"""
from __future__ import division, print_function
import os
import sys
import glob
import json
import math
import struct
import pathlib
import binascii
import datetime
import tempfile
from io import BytesIO
import pytest
import numpy
from numpy.testing import assert_array_equal, assert_array_almost_equal
import tifffile
try:
from tifffile import * # noqa
STAR_IMPORTED = (
imwrite, imsave, imread, imshow, # noqa
TiffFile, TiffWriter, TiffSequence, # noqa
FileHandle, lazyattr, natural_sorted, stripnull, memmap, # noqa
repeat_nd, format_size, product, create_output) # noqa
except NameError:
STAR_IMPORTED = None
from tifffile.tifffile import ( # noqa
TIFF,
imwrite, imread, imshow, TiffFile, TiffWriter, TiffSequence, FileHandle,
lazyattr, natural_sorted, stripnull, memmap, format_size,
repeat_nd, TiffPage, TiffFrame,
julian_datetime, excel_datetime, squeeze_axes, transpose_axes, unpack_rgb,
stripascii, sequence, product,
imagej_description_metadata, imagej_description, imagej_shape,
json_description, json_description_metadata,
scanimage_description_metadata, scanimage_artist_metadata,
svs_description_metadata, pilatus_description_metadata,
metaseries_description_metadata, fluoview_description_metadata,
reshape_axes, apply_colormap, askopenfilename,
reshape_nd, read_scanimage_metadata, matlabstr2py, bytes2str,
pformat, snipstr, byteorder_isnative,
lsm2bin, create_output, hexdump, validate_jhove)
# allow to skip large (memory, number, duration) tests
SKIP_HUGE = False
SKIP_EXTENDED = False
SKIP_ROUNDTRIPS = False
SKIP_DATA = False
VALIDATE = False # validate written files with jhove
MINISBLACK = TIFF.PHOTOMETRIC.MINISBLACK
MINISWHITE = TIFF.PHOTOMETRIC.MINISWHITE
RGB = TIFF.PHOTOMETRIC.RGB
CFA = TIFF.PHOTOMETRIC.CFA
PALETTE = TIFF.PHOTOMETRIC.PALETTE
YCBCR = TIFF.PHOTOMETRIC.YCBCR
CONTIG = TIFF.PLANARCONFIG.CONTIG
SEPARATE = TIFF.PLANARCONFIG.SEPARATE
LZW = TIFF.COMPRESSION.LZW
LZMA = TIFF.COMPRESSION.LZMA
ZSTD = TIFF.COMPRESSION.ZSTD
WEBP = TIFF.COMPRESSION.WEBP
PACKBITS = TIFF.COMPRESSION.PACKBITS
JPEG = TIFF.COMPRESSION.JPEG
APERIO_JP2000_RGB = TIFF.COMPRESSION.APERIO_JP2000_RGB
ADOBE_DEFLATE = TIFF.COMPRESSION.ADOBE_DEFLATE
DEFLATE = TIFF.COMPRESSION.DEFLATE
NONE = TIFF.COMPRESSION.NONE
LSB2MSB = TIFF.FILLORDER.LSB2MSB
ASSOCALPHA = TIFF.EXTRASAMPLE.ASSOCALPHA
UNASSALPHA = TIFF.EXTRASAMPLE.UNASSALPHA
UNSPECIFIED = TIFF.EXTRASAMPLE.UNSPECIFIED
IS_PY2 = sys.version_info[0] == 2
IS_32BIT = sys.maxsize < 2**32
FILE_FLAGS = ['is_' + a for a in TIFF.FILE_FLAGS]
FILE_FLAGS += [name for name in dir(TiffFile) if name.startswith('is_')]
PAGE_FLAGS = [name for name in dir(TiffPage) if name.startswith('is_')]
HERE = os.path.dirname(__file__)
TEMPDIR = os.path.join(HERE, '_tmp')
DATADIR = os.path.join(HERE, 'data')
if not os.path.exists(TEMPDIR):
TEMPDIR = tempfile.gettempdir()
if not os.path.exists(DATADIR):
SKIP_DATA = True # TODO: decorate tests
def data_file(pathname, base=DATADIR):
"""Return path to data file(s)."""
path = os.path.join(base, *pathname.split('/'))
if any(i in path for i in '*?'):
return glob.glob(path)
return path
def random_data(dtype, shape):
"""Return random numpy array."""
# TODO: use nd noise
if dtype == '?':
return numpy.random.rand(*shape) < 0.5
data = numpy.random.rand(*shape) * 255
data = data.astype(dtype)
return data
def assert_file_flags(tiff_file):
"""Access all flags of TiffFile."""
for flag in FILE_FLAGS:
getattr(tiff_file, flag)
def assert_page_flags(tiff_page):
"""Access all flags of TiffPage."""
for flag in PAGE_FLAGS:
getattr(tiff_page, flag)
def assert__str__(tif, detail=3):
"""Call the TiffFile.__str__ function."""
for i in range(detail+1):
TiffFile.__str__(tif, detail=i)
if VALIDATE:
def assert_jhove(filename, *args, **kwargs):
"""Validate TIFF file using jhove script."""
validate_jhove(filename, 'jhove.cmd', *args, **kwargs)
else:
def assert_jhove(*args, **kwargs):
"""Do not validate TIFF file."""
return
class TempFileName():
"""Temporary file name context manager."""
def __init__(self, name=None, ext='.tif', remove=False):
self.remove = remove or TEMPDIR == tempfile.gettempdir()
if not name:
self.name = tempfile.NamedTemporaryFile(prefix='test_').name
else:
self.name = os.path.join(TEMPDIR, "test_%s%s" % (name, ext))
def __enter__(self):
return self.name
def __exit__(self, exc_type, exc_value, traceback):
if self.remove:
try:
os.remove(self.name)
except Exception:
pass
try:
numpy.set_printoptions(legacy='1.13')
except TypeError:
pass
###############################################################################
# Tests for specific issues
def test_issue_star_import():
"""Test from tifffile import *."""
assert STAR_IMPORTED is not None
assert lsm2bin not in STAR_IMPORTED
def test_issue_version_mismatch():
"""Test 'tifffile.__version__' matches docstrings."""
ver = ':Version: ' + tifffile.__version__
assert ver in __doc__
assert ver in tifffile.__doc__
def test_issue_specific_pages():
"""Test read second page."""
data = random_data('uint8', (3, 21, 31))
with TempFileName('specific_pages') as fname:
imwrite(fname, data, photometric='MINISBLACK')
a = imread(fname)
assert a.shape == (3, 21, 31)
# UserWarning: can not reshape (21, 31) to (3, 21, 31)
a = imread(fname, key=1)
assert a.shape == (21, 31)
assert_array_equal(a, data[1])
with TempFileName('specific_pages_bigtiff') as fname:
imwrite(fname, data, bigtiff=True, photometric='MINISBLACK')
a = imread(fname)
assert a.shape == (3, 21, 31)
# UserWarning: can not reshape (21, 31) to (3, 21, 31)
a = imread(fname, key=1)
assert a.shape == (21, 31)
assert_array_equal(a, data[1])
def test_issue_circular_ifd():
"""Test circular IFD raises error."""
fname = data_file('Tiff4J/IFD struct/Circular E.tif')
with pytest.raises(IndexError):
imread(fname)
@pytest.mark.skipif(IS_PY2, reason='fails on Python 2')
def test_issue_bad_description(caplog):
"""Test page.description is empty when ImageDescription is not ASCII."""
# ImageDescription is not ASCII but bytes
fname = data_file('stk/cells in the eye2.stk')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.description == ''
assert__str__(tif)
assert 'coercing invalid ASCII to bytes' in caplog.text
@pytest.mark.skipif(IS_PY2, reason='fails on Python 2')
def test_issue_bad_ascii(caplog):
"""Test coercing invalid ASCII to bytes."""
# ImageID is not ASCII but bytes
# https://github.com/blink1073/tifffile/pull/38
fname = data_file('issues/tifffile_013_tagfail.tif')
with TiffFile(fname) as tif:
tags = tif.pages[0].tags
assert tags['ImageID'].value[-8:] == b'rev 2893'
assert__str__(tif)
assert 'coercing invalid ASCII to bytes' in caplog.text
def test_issue_sample_format():
"""Test write correct number of SampleFormat values."""
# https://github.com/ngageoint/geopackage-tiff-java/issues/5
data = random_data('uint16', (256, 256, 4))
with TempFileName('sample_format') as fname:
imwrite(fname, data)
with TiffFile(fname) as tif:
tags = tif.pages[0].tags
assert tags['SampleFormat'].value == (1, 1, 1, 1)
assert tags['ExtraSamples'].value == 2
assert__str__(tif)
def test_issue_palette_with_extrasamples():
"""Test read palette with extra samples."""
# https://github.com/python-pillow/Pillow/issues/1597
fname = data_file('issues/palette_with_extrasamples.tif')
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.photometric == PALETTE
assert page.compression == LZW
assert page.imagewidth == 518
assert page.imagelength == 556
assert page.bitspersample == 8
assert page.samplesperpixel == 2
# assert data
image = page.asrgb()
assert image.shape == (556, 518, 3)
assert image.dtype == 'uint16'
image = tif.asarray()
# self.assertEqual(image.shape[-3:], (556, 518, 2))
assert image.shape == (556, 518, 2)
assert image.dtype == 'uint8'
del image
assert__str__(tif)
def test_issue_incorrect_rowsperstrip_count():
"""Test read incorrect count for rowsperstrip; bitspersample = 4."""
# https://github.com/python-pillow/Pillow/issues/1544
fname = data_file('bad/incorrect_count.tiff')
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.photometric == PALETTE
assert page.compression == ADOBE_DEFLATE
assert page.imagewidth == 32
assert page.imagelength == 32
assert page.bitspersample == 4
assert page.samplesperpixel == 1
assert page.rowsperstrip == 32
assert page.databytecounts == (89,)
# assert data
image = page.asrgb()
assert image.shape == (32, 32, 3)
del image
assert__str__(tif)
def test_issue_no_bytecounts(caplog):
"""Test read no bytecounts."""
with TiffFile(data_file('bad/img2_corrupt.tif')) as tif:
assert not tif.is_bigtiff
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.databytecounts == (0,)
assert page.dataoffsets == (512,)
# assert data
image = tif.asarray()
assert image.shape == (800, 1200)
assert 'invalid tag value offset' in caplog.text
assert 'unknown tag data type 31073' in caplog.text
assert 'invalid page offset (808333686)' in caplog.text
def test_issue_missing_eoi_in_strips():
"""Test read LZW strips without EOI."""
# 256x256 uint16, lzw, imagej
# Strips do not contain an EOI code as required by the TIFF spec.
# File generated by `tiffcp -c lzw Z*.tif stack.tif` from
# Bars-G10-P15.zip
# Failed with "series 0 failed: string size must be a multiple of
# element size"
# Reported by Kai Wohlfahrt on 3/7/2014
fname = data_file('issues/stack.tif')
with TiffFile(fname) as tif:
assert tif.is_imagej
assert tif.byteorder == '<'
assert len(tif.pages) == 128
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.imagewidth == 256
assert page.imagelength == 256
assert page.bitspersample == 16
# assert series properties
series = tif.series[0]
assert series.shape == (128, 256, 256)
assert series.dtype.name == 'uint16'
assert series.axes == 'IYX'
# assert ImageJ tags
ijtags = tif.imagej_metadata
assert ijtags['ImageJ'] == '1.41e'
# assert data
data = tif.asarray()
assert data.shape == (128, 256, 256)
assert data.dtype.name == 'uint16'
assert data[64, 128, 128] == 19226
del data
assert__str__(tif)
def test_issue_valueoffset():
"""Test read TiffTag.valueoffsets."""
unpack = struct.unpack
data = random_data('uint16', (2, 19, 31))
software = 'test_tifffile'
with TempFileName('valueoffset') as fname:
imwrite(fname, data, software=software, photometric='minisblack')
with TiffFile(fname, movie=True) as tif:
with open(fname, 'rb') as fh:
page = tif.pages[0]
# inline value
fh.seek(page.tags['ImageLength'].valueoffset)
assert page.imagelength == unpack('H', fh.read(2))[0]
# separate value
fh.seek(page.tags['Software'].valueoffset)
assert page.software == bytes2str(fh.read(13))
# TiffFrame
page = tif.pages[1].aspage()
fh.seek(page.tags['StripOffsets'].valueoffset)
assert page.dataoffsets[0] == unpack('I', fh.read(4))[0]
def test_issue_pages_number():
"""Test number of pages."""
fname = data_file('large/100000_pages.tif')
with TiffFile(fname) as tif:
assert len(tif.pages) == 100000
assert__str__(tif, 0)
def test_issue_pages_iterator():
"""Test iterating over pages in series."""
data = random_data('int8', (8, 219, 301))
with TempFileName('page_iterator') as fname:
imwrite(fname, data[0])
imwrite(fname, data, photometric='minisblack', append=True,
metadata={'axes': 'ZYX'})
imwrite(fname, data[-1], append=True)
with TiffFile(fname) as tif:
assert len(tif.pages) == 10
assert len(tif.series) == 3
page = tif.pages[1]
assert page.is_contiguous
assert page.photometric == MINISBLACK
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
# test reading series 1
series = tif.series[1]
assert len(series._pages) == 1
assert len(series.pages) == 8
image = series.asarray()
assert_array_equal(data, image)
for i, page in enumerate(series.pages):
im = page.asarray()
assert_array_equal(image[i], im)
assert__str__(tif)
def test_issue_pathlib():
"""Test support for pathlib.Path."""
data = random_data('uint16', (219, 301))
with TempFileName('pathlib') as fname:
fname = pathlib.Path(fname)
imwrite(fname, data)
with TiffFile(fname) as tif:
with TempFileName('pathlib_out') as outfname:
outfname = pathlib.Path(outfname)
im = tif.asarray(out=outfname)
assert isinstance(im, numpy.core.memmap)
assert_array_equal(im, data)
if not IS_PY2:
assert os.path.samefile(im.filename, str(outfname))
###############################################################################
# Test specific functions
def test_func_memmap():
"""Test memmap function."""
with TempFileName('memmap_new') as fname:
# create new file
im = memmap(fname, shape=(32, 16), dtype='float32',
bigtiff=True, compress=False)
im[31, 15] = 1.0
im.flush()
assert im.shape == (32, 16)
assert im.dtype == numpy.dtype('float32')
del im
im = memmap(fname, page=0, mode='r')
assert im[31, 15] == 1.0
del im
im = memmap(fname, series=0, mode='c')
assert im[31, 15] == 1.0
del im
# append to file
im = memmap(fname, shape=(3, 64, 64), dtype='uint16',
append=True, photometric='MINISBLACK')
im[2, 63, 63] = 1.0
im.flush()
assert im.shape == (3, 64, 64)
assert im.dtype == numpy.dtype('uint16')
del im
im = memmap(fname, page=3, mode='r')
assert im[63, 63] == 1
del im
im = memmap(fname, series=1, mode='c')
assert im[2, 63, 63] == 1
del im
# can not memory-map compressed array
with pytest.raises(ValueError):
memmap(fname, shape=(16, 16), dtype='float32',
append=True, compress=6)
def test_func_memmap_fail():
"""Test non-native byteorder can not be memory mapped."""
with TempFileName('memmap_fail') as fname:
with pytest.raises(ValueError):
memmap(fname, shape=(16, 16), dtype='float32', byteorder='>')
def test_func_repeat_nd():
"""Test repeat_nd function."""
a = repeat_nd([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]], (2, 3))
assert_array_equal(a, [[0, 0, 0, 1, 1, 1, 2, 2, 2],
[0, 0, 0, 1, 1, 1, 2, 2, 2],
[3, 3, 3, 4, 4, 4, 5, 5, 5],
[3, 3, 3, 4, 4, 4, 5, 5, 5],
[6, 6, 6, 7, 7, 7, 8, 8, 8],
[6, 6, 6, 7, 7, 7, 8, 8, 8]])
def test_func_byteorder_isnative():
"""Test byteorder_isnative function."""
assert not byteorder_isnative('>')
assert byteorder_isnative('<')
assert byteorder_isnative('=')
assert byteorder_isnative(sys.byteorder)
def test_func_reshape_nd():
"""Test reshape_nd function."""
assert reshape_nd(numpy.empty(0), 2).shape == (1, 0)
assert reshape_nd(numpy.empty(1), 3).shape == (1, 1, 1)
assert reshape_nd(numpy.empty((2, 3)), 3).shape == (1, 2, 3)
assert reshape_nd(numpy.empty((2, 3, 4)), 3).shape == (2, 3, 4)
assert reshape_nd((0,), 2) == (1, 0)
assert reshape_nd((1,), 3) == (1, 1, 1)
assert reshape_nd((2, 3), 3) == (1, 2, 3)
assert reshape_nd((2, 3, 4), 3) == (2, 3, 4)
def test_func_apply_colormap():
"""Test apply_colormap function."""
image = numpy.arange(256, dtype='uint8')
colormap = numpy.vstack([image, image, image]).astype('uint16') * 256
assert_array_equal(apply_colormap(image, colormap)[-1], colormap[:, -1])
def test_func_reshape_axes():
"""Test reshape_axes function."""
assert reshape_axes('YXS', (219, 301, 1), (219, 301, 1)) == 'YXS'
assert reshape_axes('YXS', (219, 301, 3), (219, 301, 3)) == 'YXS'
assert reshape_axes('YXS', (219, 301, 1), (219, 301)) == 'YX'
assert reshape_axes('YXS', (219, 301, 1), (219, 1, 1, 301, 1)) == 'YQQXS'
assert reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 301, 1)) == 'QQYXQ'
assert reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 1, 301, 1)
) == 'QQYQXQ'
assert reshape_axes('IYX', (12, 219, 301), (3, 2, 219, 2, 301, 1)
) == 'QQQQXQ'
with pytest.raises(ValueError):
reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 2, 301, 1))
with pytest.raises(ValueError):
reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 301, 2))
def test_func_julian_datetime():
"""Test julian_datetime function."""
assert julian_datetime(2451576, 54362783) == (
datetime.datetime(2000, 2, 2, 15, 6, 2, 783))
def test_func_excel_datetime():
"""Test excel_datetime function."""
assert excel_datetime(40237.029999999795) == (
datetime.datetime(2010, 2, 28, 0, 43, 11, 999982))
def test_func_natural_sorted():
"""Test natural_sorted function."""
assert natural_sorted(['f1', 'f2', 'f10']) == ['f1', 'f2', 'f10']
def test_func_stripnull():
"""Test stripnull function."""
assert stripnull(b'string\x00') == b'string'
def test_func_stripascii():
"""Test stripascii function."""
assert stripascii(b'string\x00string\n\x01\x00') == b'string\x00string\n'
assert stripascii(b'\x00') == b''
def test_func_sequence():
"""Test sequence function."""
assert sequence(1) == (1,)
assert sequence([1]) == [1]
def test_func_product():
"""Test product function."""
assert product([2**8, 2**30]) == 274877906944
assert product([]) == 1
def test_func_squeeze_axes():
"""Test squeeze_axes function."""
assert squeeze_axes((5, 1, 2, 1, 1), 'TZYXC') == ((5, 2, 1), 'TYX')
def test_func_transpose_axes():
"""Test transpose_axes function."""
assert transpose_axes(numpy.zeros((2, 3, 4, 5)), 'TYXC',
asaxes='CTZYX').shape == (5, 2, 1, 3, 4)
def test_func_unpack_rgb():
"""Test unpack_rgb function."""
data = struct.pack('BBBB', 0x21, 0x08, 0xff, 0xff)
assert_array_equal(unpack_rgb(data, '<B', (5, 6, 5), False),
[1, 1, 1, 31, 63, 31])
assert_array_equal(unpack_rgb(data, '<B', (5, 6, 5)),
[8, 4, 8, 255, 255, 255])
assert_array_equal(unpack_rgb(data, '<B', (5, 5, 5)),
[16, 8, 8, 255, 255, 255])
def test_func_matlabstr2py():
"""Test matlabstr2py function."""
p = matlabstr2py("""
Unknown = unknown
% Comment
""")
assert p['Array'] == [1, 2]
assert p['Array.2D'] == [[1], [2]]
assert p['Array.Empty'] == []
assert p['Cell'] == ['', '']
assert p['Class'] == '@class'
assert p['False'] is False
assert p['Filename'] == 'C:\\Users\\scanimage.cfg'
assert p['Float'] == 3.14
assert p['Float.E'] == 3.14
assert p['Float.Inf'] == float('inf')
# self.assertEqual(p['Float.NaN'], float('nan')) # can't compare NaN
assert p['Int'] == 10
assert p['StructObject'] == ''
assert p['Ones'] == [[]]
assert p['String'] == 'string'
assert p['String.Array'] == 'ab'
assert p['String.Empty'] == ''
assert p['Transform'] == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert p['True'] is True
assert p['Unknown'] == 'unknown'
assert p['Zeros'] == [[0.0]]
assert p['Zeros.Empty'] == [[]]
assert p['false'] is False
assert p['true'] is True
def test_func_hexdump():
"""Test hexdump function."""
data = binascii.unhexlify(
'49492a00080000000e00fe0004000100'
'00000000000000010400010000000001'
'00000101040001000000000100000201'
'03000100000020000000030103000100')
# one line
assert hexdump(data[:16]) == (
'49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............')
# height=1
assert hexdump(data, width=64, height=1) == (
'49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............')
# all lines
assert hexdump(data) == (
'00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 '
'II*.............\n'
'10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 '
'................\n'
'20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 '
'................\n'
'30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 00 '
'...... .........')
# skip center
assert hexdump(data, height=3, snipat=0.5) == (
'00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 '
'II*.............\n'
'...\n'
'30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 00 '
'...... .........')
# skip start
assert hexdump(data, height=3, snipat=0) == (
'10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 '
'................\n'
'20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 '
'................\n'
'30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 00 '
'...... .........')
# skip end
assert hexdump(data, height=3, snipat=1) == (
'00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 '
'II*.............\n'
'10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 '
'................\n'
'20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 '
'................')
def test_func_snipstr():
"""Test snipstr function."""
# cut middle
assert snipstr(u'abc', 3, ellipsis='...') == u'abc'
assert snipstr(u'abc', 3, ellipsis='....') == u'abc'
assert snipstr(u'abcdefg', 4, ellipsis='') == u'abcd'
assert snipstr(u'abcdefg', 4, ellipsis=None) == u'abc…'
assert snipstr(b'abcdefg', 4, ellipsis=None) == b'a...'
assert snipstr(u'abcdefghijklmnop', 8, ellipsis=None) == u'abcd…nop'
assert snipstr(b'abcdefghijklmnop', 8, ellipsis=None) == b'abc...op'
assert snipstr(u'abcdefghijklmnop', 9, ellipsis=None) == u'abcd…mnop'
assert snipstr(b'abcdefghijklmnop', 9, ellipsis=None) == b'abc...nop'
assert snipstr('abcdefghijklmnop', 8, ellipsis='..') == 'abc..nop'
assert snipstr('abcdefghijklmnop', 8, ellipsis='....') == 'ab....op'
assert snipstr('abcdefghijklmnop', 8, ellipsis='......') == 'ab......'
# cut right
assert snipstr(u'abc', 3, snipat=1, ellipsis='...') == u'abc'
assert snipstr(u'abc', 3, snipat=1, ellipsis='....') == u'abc'
assert snipstr(u'abcdefg', 4, snipat=1, ellipsis='') == u'abcd'
assert snipstr(u'abcdefg', 4, snipat=1, ellipsis=None) == u'abc…'
assert snipstr(b'abcdefg', 4, snipat=1, ellipsis=None) == b'a...'
assert snipstr(
u'abcdefghijklmnop', 8, snipat=1, ellipsis=None) == u'abcdefg…'
assert snipstr(
b'abcdefghijklmnop', 8, snipat=1, ellipsis=None) == b'abcde...'
assert snipstr(
u'abcdefghijklmnop', 9, snipat=1, ellipsis=None) == u'abcdefgh…'
assert snipstr(
b'abcdefghijklmnop', 9, snipat=1, ellipsis=None) == b'abcdef...'
assert snipstr(
'abcdefghijklmnop', 8, snipat=1, ellipsis='..') == 'abcdef..'
assert snipstr(
'abcdefghijklmnop', 8, snipat=1, ellipsis='....') == 'abcd....'
assert snipstr(
'abcdefghijklmnop', 8, snipat=1, ellipsis='......') == 'ab......'
# cut left
assert snipstr(u'abc', 3, snipat=0, ellipsis='...') == u'abc'
assert snipstr(u'abc', 3, snipat=0, ellipsis='....') == u'abc'
assert snipstr(u'abcdefg', 4, snipat=0, ellipsis='') == u'defg'
assert snipstr(u'abcdefg', 4, snipat=0, ellipsis=None) == u'…efg'
assert snipstr(b'abcdefg', 4, snipat=0, ellipsis=None) == b'...g'
assert snipstr(
u'abcdefghijklmnop', 8, snipat=0, ellipsis=None) == u'…jklmnop'
assert snipstr(
b'abcdefghijklmnop', 8, snipat=0, ellipsis=None) == b'...lmnop'
assert snipstr(
u'abcdefghijklmnop', 9, snipat=0, ellipsis=None) == u'…ijklmnop'
assert snipstr(
b'abcdefghijklmnop', 9, snipat=0, ellipsis=None) == b'...klmnop'
assert snipstr(
'abcdefghijklmnop', 8, snipat=0, ellipsis='..') == '..klmnop'
assert snipstr(
'abcdefghijklmnop', 8, snipat=0, ellipsis='....') == '....mnop'
assert snipstr(
'abcdefghijklmnop', 8, snipat=0, ellipsis='......') == '......op'
def test_func_pformat_printable_bytes():
"""Test pformat function with printable bytes."""
value = (b'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRST'
b'UVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c')
assert pformat(value, height=1, width=60) == (
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX')
assert pformat(value, height=8, width=60) == (
r'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!'
r""""#$%&'()*+,-./:;<=>?@[\]^_`{|}~""")
def test_func_pformat_printable_unicode():
"""Test pformat function with printable unicode."""
value = (u'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRST'
u'UVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c')
assert pformat(value, height=1, width=60) == (
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX')
assert pformat(value, height=8, width=60) == (
r'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!'
r""""#$%&'()*+,-./:;<=>?@[\]^_`{|}~""")
def test_func_pformat_hexdump():
"""Test pformat function with unprintable bytes."""
value = binascii.unhexlify('49492a00080000000e00fe0004000100'
'00000000000000010400010000000001'
'00000101040001000000000100000201'
'03000100000020000000030103000100')
assert pformat(value, height=1, width=60) == (
'49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 II*............')
assert pformat(value, height=8, width=70) == """
00: 49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............
10: 00 00 00 00 00 00 00 01 04 00 01 00 00 00 00 01 ................
20: 00 00 01 01 04 00 01 00 00 00 00 01 00 00 02 01 ................
30: 03 00 01 00 00 00 20 00 00 00 03 01 03 00 01 00 ...... .........
""".strip()
def test_func_pformat_dict():
"""Test pformat function with dict."""
value = {'GTCitationGeoKey': 'WGS 84 / UTM zone 29N',
'GTModelTypeGeoKey': 1,
'GTRasterTypeGeoKey': 1,
'KeyDirectoryVersion': 1,
'KeyRevision': 1,
'KeyRevisionMinor': 2,
'ModelTransformation': numpy.array([
[6.00000e+01, 0.00000e+00, 0.00000e+00, 6.00000e+05],
[0.00000e+00, -6.00000e+01, 0.00000e+00, 5.90004e+06],
[0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00],
[0.00000e+00, 0.00000e+00, 0.00000e+00, 1.00000e+00]]),
'PCSCitationGeoKey': 'WGS 84 / UTM zone 29N',
'ProjectedCSTypeGeoKey': 32629}
assert pformat(value, height=1, width=60) == (
"{'GTCitationGeoKey': 'WGS 84 / UTM zone 29N', 'GTModelTypeGe")
assert pformat(value, height=8, width=60) == (
"""{'GTCitationGeoKey': 'WGS 84 / UTM zone 29N',
'GTModelTypeGeoKey': 1,
'GTRasterTypeGeoKey': 1,
'KeyDirectoryVersion': 1,
...
[ 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 1.00000000e+00]]),
'PCSCitationGeoKey': 'WGS 84 / UTM zone 29N',
'ProjectedCSTypeGeoKey': 32629}""")
@pytest.mark.skipif(IS_PY2, reason='fails on Python 2')
def test_func_pformat_list():
"""Test pformat function with list."""
value = (60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.,
60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.)
assert pformat(value, height=1, width=60) == (
'(60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, 60.0,')
assert pformat(value, height=8, width=60) == (
'(60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0, 60.0,\n'
' 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.0)')
def test_func_pformat_numpy():
"""Test pformat function with numpy array."""
value = numpy.array(
(60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.,
60.0, 0.0, 0.0, 600000.0, 0.0, -60.0, 0.0, 5900040.))
assert pformat(value, height=1, width=60) == (
'array([6.00000000e+01, 0.00000000e+00, 0.00000000e+00, 6.000')
assert pformat(value, height=8, width=60) == (
"""array([ 6.00000000e+01, 0.00000000e+00, 0.00000000e+00,
6.00000000e+05, 0.00000000e+00, -6.00000000e+01,
0.00000000e+00, 5.90004000e+06, 6.00000000e+01,
0.00000000e+00, 0.00000000e+00, 6.00000000e+05,
0.00000000e+00, -6.00000000e+01, 0.00000000e+00,
5.90004000e+06])""")
def test_func_pformat_xml():
    """Test pformat function with XML."""
    value = """<?xml version="1.0" encoding="ISO-8859-1" ?>
<Dimap_Document name="band2.dim">
 <Metadata_Id>
  <METADATA_FORMAT version="2.12.1">DIMAP</METADATA_FORMAT>
  <METADATA_PROFILE>BEAM-DATAMODEL-V1</METADATA_PROFILE>
 </Metadata_Id>
 <Image_Interpretation>
  <Spectral_Band_Info>
   <BAND_INDEX>0</BAND_INDEX>
  </Spectral_Band_Info>
 </Image_Interpretation>
</Dimap_Document>"""
    assert pformat(value, height=1, width=60) == (
        '<?xml version="1.0" encoding="ISO-8859-1" ?> <Dimap_Documen')
    assert pformat(value, height=8, width=60) == (
        """<?xml version="1.0" encoding="ISO-8859-1" ?>
<Dimap_Document name="band2.dim">
 <Metadata_Id>
  <METADATA_FORMAT version="2.12.1">DIMAP</METADATA_FORMAT>
...
   <BAND_INDEX>0</BAND_INDEX>
  </Spectral_Band_Info>
 </Image_Interpretation>
</Dimap_Document>""")
@pytest.mark.skipif(IS_32BIT, reason='not enough memory')
@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
def test_func_lsm2bin():
"""Test lsm2bin function."""
# Convert LSM to BIN
fname = data_file('lsm/Twoareas_Zstacks54slices_3umintervals_5cycles.lsm')
# fname = data_file(
# 'LSM/fish01-wt-t01-10_ForTest-20zplanes10timepoints.lsm')
lsm2bin(fname, '', verbose=True)
def test_func_create_output():
"""Test create_output function."""
shape = (16, 17)
dtype = 'uint16'
# None
a = create_output(None, shape, dtype)
assert_array_equal(a, numpy.zeros(shape, dtype))
# existing array
b = create_output(a, a.shape, a.dtype)
assert a is b.base
# 'memmap'
a = create_output('memmap', shape, dtype)
assert isinstance(a, numpy.core.memmap)
del a
# 'memmap:tempdir'
a = create_output('memmap:%s' % os.path.abspath(TEMPDIR), shape, dtype)
assert isinstance(a, numpy.core.memmap)
del a
# filename
with TempFileName('nopages') as fname:
a = create_output(fname, shape, dtype)
del a
@pytest.mark.parametrize('key', [None, 0, 3, 'series'])
@pytest.mark.parametrize('out', [None, 'empty', 'memmap', 'dir', 'name'])
def test_func_create_output_asarray(out, key):
"""Test create_output function in context of asarray."""
data = random_data('uint16', (5, 219, 301))
with TempFileName('out') as fname:
imwrite(fname, data)
# assert file
with TiffFile(fname) as tif:
tif.pages.useframes = True
tif.pages.load()
if key is None:
# default
obj = tif
dat = data
elif key == 'series':
# series
obj = tif.series[0]
dat = data
else:
# single page/frame
obj = tif.pages[key]
dat = data[key]
if key == 0:
assert isinstance(obj, TiffPage)
elif not IS_PY2:
assert isinstance(obj, TiffFrame)
if out is None:
# new array
image = obj.asarray(out=None)
assert_array_equal(dat, image)
del image
elif out == 'empty':
# existing array
image = numpy.empty_like(dat)
obj.asarray(out=image)
assert_array_equal(dat, image)
del image
elif out == 'memmap':
# memmap in temp dir
image = obj.asarray(out='memmap')
assert isinstance(image, numpy.core.memmap)
assert_array_equal(dat, image)
del image
elif out == 'dir':
# memmap in specified dir
tempdir = os.path.dirname(fname)
image = obj.asarray(out='memmap:%s' % tempdir)
assert isinstance(image, numpy.core.memmap)
assert_array_equal(dat, image)
del image
elif out == 'name':
# memmap in specified file
with TempFileName('out', ext='.memmap') as fileout:
image = obj.asarray(out=fileout)
assert isinstance(image, numpy.core.memmap)
assert_array_equal(dat, image)
del image
###############################################################################
# Test FileHandle class
FILEHANDLE_NAME = data_file('test_FileHandle.bin')
FILEHANDLE_SIZE = 7937381
FILEHANDLE_OFFSET = 333
FILEHANDLE_LENGTH = 7937381 - 666
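These constants are mutually consistent: FILEHANDLE_LENGTH trims FILEHANDLE_OFFSET bytes from each end of the file, which is the size = FILEHANDLE_SIZE - 2*offset computation in assert_filehandle:

```python
# 333 bytes dropped at both ends of the 7937381-byte file:
assert 7937381 - 2 * 333 == 7937381 - 666 == 7936715
```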
def create_filehandle_file():
"""Write test_FileHandle.bin file."""
# array start 999
# array end 1254
# recarray start 2253
# recarray end 6078
# tiff start 7077
# tiff end 12821
# mm offset = 13820
# mm size = 7936382
with open(FILEHANDLE_NAME, 'wb') as fh:
# buffer
numpy.ones(999, dtype='uint8').tofile(fh)
# array
print('array start', fh.tell())
numpy.arange(255, dtype='uint8').tofile(fh)
print('array end', fh.tell())
# buffer
numpy.ones(999, dtype='uint8').tofile(fh)
# recarray
print('recarray start', fh.tell())
a = numpy.recarray((255, 3),
dtype=[('x', 'float32'), ('y', 'uint8')])
for i in range(3):
a[:, i].x = numpy.arange(255, dtype='float32')
a[:, i].y = numpy.arange(255, dtype='uint8')
a.tofile(fh)
print('recarray end', fh.tell())
# buffer
numpy.ones(999, dtype='uint8').tofile(fh)
# tiff
print('tiff start', fh.tell())
with open('Tests/vigranumpy.tif', 'rb') as tif:
fh.write(tif.read())
print('tiff end', fh.tell())
# buffer
numpy.ones(999, dtype='uint8').tofile(fh)
# micromanager
print('micromanager start', fh.tell())
with open('Tests/micromanager/micromanager.tif', 'rb') as tif:
fh.write(tif.read())
print('micromanager end', fh.tell())
# buffer
numpy.ones(999, dtype='uint8').tofile(fh)
def assert_filehandle(fh, offset=0):
"""Assert filehandle can read test_FileHandle.bin."""
size = FILEHANDLE_SIZE - 2*offset
pad = 999 - offset
assert fh.size == size
assert fh.tell() == 0
assert fh.read(4) == b'\x01\x01\x01\x01'
fh.seek(pad-4)
assert fh.tell() == pad-4
assert fh.read(4) == b'\x01\x01\x01\x01'
fh.seek(-4, whence=1)
assert fh.tell() == pad-4
assert fh.read(4) == b'\x01\x01\x01\x01'
fh.seek(-pad, whence=2)
assert fh.tell() == size-pad
assert fh.read(4) == b'\x01\x01\x01\x01'
# assert array
fh.seek(pad, whence=0)
assert fh.tell() == pad
assert_array_equal(fh.read_array('uint8', 255),
numpy.arange(255, dtype='uint8'))
# assert records
fh.seek(999, whence=1)
assert fh.tell() == 2253-offset
records = fh.read_record([('x', 'float32'), ('y', 'uint8')], (255, 3))
assert_array_equal(records.y[:, 0], range(255))
assert_array_equal(records.x, records.y)
# assert memmap
if fh.is_file:
assert_array_equal(fh.memmap_array('uint8', 255, pad),
numpy.arange(255, dtype='uint8'))
def test_filehandle_seekable():
"""Test FileHandle must be seekable."""
try:
from urllib2 import build_opener
except ImportError:
from urllib.request import build_opener
opener = build_opener()
opener.addheaders = [('User-Agent', 'test_tifffile.py')]
fh = opener.open('https://download.lfd.uci.edu/pythonlibs/test.tif')
with pytest.raises(ValueError):
FileHandle(fh)
def test_filehandle_write_bytesio():
"""Test write to FileHandle from BytesIO."""
value = b'123456789'
buf = BytesIO()
with FileHandle(buf) as fh:
fh.write(value)
buf.seek(0)
assert buf.read() == value
def test_filehandle_write_bytesio_offset():
"""Test write to FileHandle from BytesIO with offset."""
pad = b'abcd'
value = b'123456789'
buf = BytesIO()
buf.write(pad)
with FileHandle(buf) as fh:
fh.write(value)
buf.write(pad)
# assert buffer
buf.seek(len(pad))
assert buf.read(len(value)) == value
buf.seek(2)
with FileHandle(buf, offset=len(pad), size=len(value)) as fh:
assert fh.read(len(value)) == value
def test_filehandle_filename():
"""Test FileHandle from filename."""
with FileHandle(FILEHANDLE_NAME) as fh:
assert fh.name == "test_FileHandle.bin"
assert fh.is_file
assert_filehandle(fh)
def test_filehandle_filename_offset():
"""Test FileHandle from filename with offset."""
with FileHandle(FILEHANDLE_NAME, offset=FILEHANDLE_OFFSET,
size=FILEHANDLE_LENGTH) as fh:
assert fh.name == "test_FileHandle.bin"
assert fh.is_file
assert_filehandle(fh, FILEHANDLE_OFFSET)
def test_filehandle_bytesio():
"""Test FileHandle from BytesIO."""
with open(FILEHANDLE_NAME, 'rb') as fh:
stream = BytesIO(fh.read())
with FileHandle(stream) as fh:
assert fh.name == "Unnamed binary stream"
assert not fh.is_file
assert_filehandle(fh)
def test_filehandle_bytesio_offset():
"""Test FileHandle from BytesIO with offset."""
with open(FILEHANDLE_NAME, 'rb') as fh:
stream = BytesIO(fh.read())
with FileHandle(stream, offset=FILEHANDLE_OFFSET,
size=FILEHANDLE_LENGTH) as fh:
assert fh.name == "Unnamed binary stream"
assert not fh.is_file
assert_filehandle(fh, offset=FILEHANDLE_OFFSET)
def test_filehandle_openfile():
"""Test FileHandle from open file."""
with open(FILEHANDLE_NAME, 'rb') as fhandle:
with FileHandle(fhandle) as fh:
assert fh.name == "test_FileHandle.bin"
assert fh.is_file
assert_filehandle(fh)
assert not fhandle.closed
def test_filehandle_openfile_offset():
"""Test FileHandle from open file with offset."""
with open(FILEHANDLE_NAME, 'rb') as fhandle:
with FileHandle(fhandle, offset=FILEHANDLE_OFFSET,
size=FILEHANDLE_LENGTH) as fh:
assert fh.name == "test_FileHandle.bin"
assert fh.is_file
assert_filehandle(fh, offset=FILEHANDLE_OFFSET)
assert not fhandle.closed
def test_filehandle_filehandle():
"""Test FileHandle from other FileHandle."""
with FileHandle(FILEHANDLE_NAME, 'rb') as fhandle:
with FileHandle(fhandle) as fh:
assert fh.name == "test_FileHandle.bin"
assert fh.is_file
assert_filehandle(fh)
assert not fhandle.closed
def test_filehandle_offset():
"""Test FileHandle from other FileHandle with offset."""
with FileHandle(FILEHANDLE_NAME, 'rb') as fhandle:
with FileHandle(fhandle, offset=FILEHANDLE_OFFSET,
size=FILEHANDLE_LENGTH) as fh:
assert fh.name == "test_FileHandle@333.bin"
assert fh.is_file
assert_filehandle(fh, offset=FILEHANDLE_OFFSET)
assert not fhandle.closed
def test_filehandle_reopen():
"""Test FileHandle close and open."""
try:
fh = FileHandle(FILEHANDLE_NAME)
assert not fh.closed
assert fh.is_file
fh.close()
assert fh.closed
fh.open()
assert not fh.closed
assert fh.is_file
assert fh.name == "test_FileHandle.bin"
assert_filehandle(fh)
finally:
fh.close()
def test_filehandle_unc_path():
"""Test FileHandle from UNC path."""
with FileHandle(r"\\localhost\Data\Data\test_FileHandle.bin") as fh:
assert fh.name == "test_FileHandle.bin"
assert fh.dirname == "\\\\localhost\\Data\\Data"
assert_filehandle(fh)
###############################################################################
# Test reading specific files
if not SKIP_EXTENDED:
TIGER_FILES = (data_file('Tigers/be/*.tif') +
data_file('Tigers/le/*.tif') +
data_file('Tigers/bigtiff-be/*.tif') +
data_file('Tigers/bigtiff-le/*.tif')
)
TIGER_IDS = ['-'.join(f.split(os.path.sep)[-2:]
).replace('-tiger', '').replace('.tif', '')
for f in TIGER_FILES]
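The parametrize ids above join the last two path components, then strip the 'tiger' fragment and the extension; for example (the filename here is hypothetical, following the same naming pattern as the Tigers files):

```python
import os

# hypothetical filename following the Tigers naming pattern:
f = os.path.join('Tigers', 'le', 'tiger-minisblack-strip-08.tif')
fid = '-'.join(f.split(os.path.sep)[-2:])
fid = fid.replace('-tiger', '').replace('.tif', '')
print(fid)  # le-minisblack-strip-08
```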
@pytest.mark.skipif(SKIP_EXTENDED, reason='many tests')
@pytest.mark.parametrize('fname', TIGER_FILES, ids=TIGER_IDS)
def test_read_tigers(fname):
"""Test tiger images from GraphicsMagick."""
with TiffFile(fname) as tif:
byteorder = {'le': '<', 'be': '>'}[os.path.split(fname)[0][-2:]]
databits = int(fname.rsplit('.tif')[0][-2:])
# assert file properties
assert_file_flags(tif)
assert tif.byteorder == byteorder
assert tif.is_bigtiff == ('bigtiff' in fname)
assert len(tif.pages) == 1
# assert page properties
page = tif.pages[0]
assert_page_flags(page)
assert page.tags['DocumentName'].value == os.path.basename(fname)
assert page.imagewidth == 73
assert page.imagelength == 76
assert page.bitspersample == databits
assert (page.photometric == RGB) == ('rgb' in fname)
assert (page.photometric == PALETTE) == ('palette' in fname)
assert page.is_tiled == ('tile' in fname)
assert (page.planarconfig == CONTIG) == ('planar' not in fname)
if 'minisblack' in fname:
assert page.photometric == MINISBLACK
# float24 not supported
if 'float' in fname and databits == 24:
with pytest.raises(ValueError):
data = tif.asarray()
return
# assert data shapes
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
# if 'palette' in fname:
# shape = (76, 73, 3)
if 'rgb' in fname:
if 'planar' in fname:
shape = (3, 76, 73)
else:
shape = (76, 73, 3)
elif 'separated' in fname:
if 'planar' in fname:
shape = (4, 76, 73)
else:
shape = (76, 73, 4)
else:
shape = (76, 73)
assert data.shape == shape
# assert data types
if 'float' in fname:
dtype = 'float%i' % databits
# elif 'palette' in fname:
# dtype = 'uint16'
elif databits == 1:
dtype = 'bool'
elif databits <= 8:
dtype = 'uint8'
elif databits <= 16:
dtype = 'uint16'
elif databits <= 32:
dtype = 'uint32'
elif databits <= 64:
dtype = 'uint64'
assert data.dtype.name == dtype
assert__str__(tif)
def test_read_exif_paint():
"""Test read EXIF tags."""
fname = data_file('exif/paint.tif')
with TiffFile(fname) as tif:
exif = tif.pages[0].tags['ExifTag'].value
assert exif['ColorSpace'] == 65535
assert exif['ExifVersion'] == '0230'
assert exif['UserComment'] == 'paint'
assert__str__(tif)
def test_read_hopper_2bit():
"""Test read 2-bit, fillorder=lsb2msb."""
# https://github.com/python-pillow/Pillow/pull/1789
fname = data_file('hopper/hopper2.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.photometric == MINISBLACK
assert not page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 128
assert page.bitspersample == 2
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (128, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
assert series.offset is None
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (128, 128)
assert data[50, 63] == 3
assert__str__(tif)
# reversed
fname = data_file('hopper/hopper2R.tif')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == MINISBLACK
assert page.fillorder == LSB2MSB
assert_array_equal(tif.asarray(), data)
assert__str__(tif)
# inverted
fname = data_file('hopper/hopper2I.tif')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == MINISWHITE
assert_array_equal(tif.asarray(), 3-data)
assert__str__(tif)
# inverted and reversed
fname = data_file('hopper/hopper2IR.tif')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == MINISWHITE
assert_array_equal(tif.asarray(), 3-data)
assert__str__(tif)
def test_read_hopper_4bit():
"""Test read 4-bit, fillorder=lsb2msb."""
# https://github.com/python-pillow/Pillow/pull/1789
fname = data_file('hopper/hopper4.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.photometric == MINISBLACK
assert not page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 128
assert page.bitspersample == 4
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (128, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
assert series.offset is None
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (128, 128)
assert data[50, 63] == 13
# reversed
fname = data_file('hopper/hopper4R.tif')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == MINISBLACK
assert page.fillorder == LSB2MSB
assert_array_equal(tif.asarray(), data)
assert__str__(tif)
# inverted
fname = data_file('hopper/hopper4I.tif')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == MINISWHITE
assert_array_equal(tif.asarray(), 15-data)
assert__str__(tif)
# inverted and reversed
fname = data_file('hopper/hopper4IR.tif')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == MINISWHITE
assert_array_equal(tif.asarray(), 15-data)
assert__str__(tif)
def test_read_lsb2msb():
"""Test read fillorder=lsb2msb, 2 series."""
# http://lists.openmicroscopy.org.uk/pipermail/ome-users
# /2015-September/005635.html
fname = data_file('test_lsb2msb.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 2
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 7100
assert page.imagelength == 4700
assert page.bitspersample == 16
assert page.samplesperpixel == 3
page = tif.pages[1]
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 7100
assert page.imagelength == 4700
assert page.bitspersample == 16
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (4700, 7100, 3)
assert series.dtype.name == 'uint16'
assert series.axes == 'YXS'
assert series.offset is None
series = tif.series[1]
assert series.shape == (4700, 7100)
assert series.dtype.name == 'uint16'
assert series.axes == 'YX'
assert series.offset is None
# assert data
data = tif.asarray(series=0)
assert isinstance(data, numpy.ndarray)
assert data.shape == (4700, 7100, 3)
assert data[2350, 3550, 1] == 60457
data = tif.asarray(series=1)
assert isinstance(data, numpy.ndarray)
assert data.shape == (4700, 7100)
assert data[2350, 3550] == 56341
assert__str__(tif)
def test_read_predictor3():
"""Test read floating point horizontal differencing by OpenImageIO."""
fname = data_file('predictor3.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert not page.is_reduced
assert not page.is_tiled
assert page.compression == ADOBE_DEFLATE
assert page.imagewidth == 1500
assert page.imagelength == 1500
assert page.bitspersample == 32
assert page.samplesperpixel == 4
# assert series properties
series = tif.series[0]
assert series.shape == (1500, 1500, 4)
assert series.dtype.name == 'float32'
assert series.axes == 'YXS'
# assert data
data = tif.asarray(series=0)
assert isinstance(data, numpy.ndarray)
assert data.shape == (1500, 1500, 4)
assert data.dtype.name == 'float32'
assert tuple(data[750, 750]) == (0., 0., 0., 1.)
assert__str__(tif)
def test_read_gimp16():
"""Test read uint16 with horizontal predictor by GIMP."""
fname = data_file('GIMP/gimp16.tiff')
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.compression == ADOBE_DEFLATE
assert page.photometric == RGB
assert page.imagewidth == 333
assert page.imagelength == 231
assert page.samplesperpixel == 3
assert page.predictor == 2
image = tif.asarray()
assert tuple(image[110, 110]) == (23308, 17303, 41160)
assert__str__(tif)
def test_read_gimp32():
"""Test read float32 with horizontal predictor by GIMP."""
fname = data_file('GIMP/gimp32.tiff')
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.compression == ADOBE_DEFLATE
assert page.photometric == RGB
assert page.imagewidth == 333
assert page.imagelength == 231
assert page.samplesperpixel == 3
assert page.predictor == 2
image = tif.asarray()
assert_array_almost_equal(
image[110, 110], (0.35565534, 0.26402164, 0.6280674))
assert__str__(tif)
def test_read_iss_vista():
"""Test read bogus imagedepth tag by ISS Vista."""
fname = data_file('iss/10um_beads_14stacks_ch1.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 14
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert not page.is_reduced
assert not page.is_tiled
assert page.compression == NONE
assert page.imagewidth == 256
assert page.imagelength == 256
assert page.tags['ImageDepth'].value == 14 # bogus
assert page.bitspersample == 16
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (14, 256, 256)
assert series.dtype.name == 'int16'
assert series.axes == 'IYX' # ZYX
assert__str__(tif)
def test_read_vips():
"""Test read 347x641 RGB, bigtiff, pyramid, tiled, produced by VIPS."""
fname = data_file('vips.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 4
assert len(tif.series) == 4
# assert page properties
page = tif.pages[0]
assert not page.is_reduced
assert page.is_tiled
assert page.compression == ADOBE_DEFLATE
assert page.imagewidth == 641
assert page.imagelength == 347
assert page.bitspersample == 8
assert page.samplesperpixel == 3
# assert series properties
series = tif.series[0]
assert series.shape == (347, 641, 3)
assert series.dtype.name == 'uint8'
assert series.axes == 'YXS'
series = tif.series[3]
page = series.pages[0]
assert page.is_reduced
assert page.is_tiled
assert series.shape == (43, 80, 3)
assert series.dtype.name == 'uint8'
assert series.axes == 'YXS'
# assert data
data = tif.asarray(series=0)
assert isinstance(data, numpy.ndarray)
assert data.shape == (347, 641, 3)
assert data.dtype.name == 'uint8'
assert tuple(data[132, 361]) == (114, 233, 58)
assert__str__(tif)
def test_read_sgi_depth():
"""Test read 128x128x128, float32, tiled SGI."""
fname = data_file('sgi/sgi_depth.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_sgi
assert page.planarconfig == CONTIG
assert page.is_tiled
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 128
assert page.imagedepth == 128
assert page.tilewidth == 128
assert page.tilelength == 128
assert page.tiledepth == 1
assert page.bitspersample == 32
assert page.samplesperpixel == 1
assert page.tags['Software'].value == (
'MFL MeVis File Format Library, TIFF Module')
# assert series properties
series = tif.series[0]
assert series.shape == (128, 128, 128)
assert series.dtype.name == 'float32'
assert series.axes == 'ZYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (128, 128, 128)
assert data.dtype.name == 'float32'
assert data[64, 64, 64] == 0.0
assert__str__(tif)
def test_read_oxford():
"""Test read 601x81, uint8, LZW."""
fname = data_file('oxford.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '>'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.compression == LZW
assert page.imagewidth == 601
assert page.imagelength == 81
assert page.bitspersample == 8
assert page.samplesperpixel == 3
# assert series properties
series = tif.series[0]
assert series.shape == (3, 81, 601)
assert series.dtype == 'uint8'
assert series.axes == 'SYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (3, 81, 601)
assert data.dtype.name == 'uint8'
assert data[1, 24, 49] == 191
assert__str__(tif)
def test_read_cramps():
"""Test read 800x607 uint8, PackBits."""
fname = data_file('cramps.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '>'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.compression == PACKBITS
assert page.photometric == MINISWHITE
assert page.imagewidth == 800
assert page.imagelength == 607
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (607, 800)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (607, 800)
assert data.dtype.name == 'uint8'
assert data[273, 426] == 34
assert__str__(tif)
def test_read_cramps_tile():
"""Test read 800x607 uint8, raw, sgi, tiled."""
fname = data_file('cramps-tile.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '>'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_tiled
assert page.is_sgi
assert page.compression == NONE
assert page.photometric == MINISWHITE
assert page.imagewidth == 800
assert page.imagelength == 607
assert page.imagedepth == 1
assert page.tilewidth == 256
assert page.tilelength == 256
assert page.tiledepth == 1
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (607, 800)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (607, 800)
assert data.dtype.name == 'uint8'
assert data[273, 426] == 34
assert__str__(tif)
def test_read_jello():
"""Test read 256x192x3, uint16, palette, PackBits."""
fname = data_file('jello.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '>'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.photometric == PALETTE
assert page.planarconfig == CONTIG
assert page.compression == PACKBITS
assert page.imagewidth == 256
assert page.imagelength == 192
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (192, 256)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
# assert data
data = page.asrgb(uint8=False)
assert isinstance(data, numpy.ndarray)
assert data.shape == (192, 256, 3)
assert data.dtype.name == 'uint16'
assert tuple(data[100, 140, :]) == (48895, 65279, 48895)
assert__str__(tif)


def test_read_django():
    """Test read 3x480x320, uint16, palette, raw."""
    fname = data_file('django.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == PALETTE
        assert page.planarconfig == CONTIG
        assert page.compression == NONE
        assert page.imagewidth == 320
        assert page.imagelength == 480
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (480, 320)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YX'
        # assert data
        data = page.asrgb(uint8=False)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (480, 320, 3)
        assert data.dtype.name == 'uint16'
        assert tuple(data[64, 64, :]) == (65535, 52171, 63222)
        assert__str__(tif)


def test_read_quad_lzw():
    """Test read 384x512 RGB uint8 old style LZW."""
    fname = data_file('quad-lzw.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_tiled
        assert page.photometric == RGB
        assert page.compression == LZW
        assert page.imagewidth == 512
        assert page.imagelength == 384
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (384, 512, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (384, 512, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[309, 460, :]) == (0, 163, 187)


def test_read_quad_lzw_le():
    """Test read 384x512 RGB uint8 LZW."""
    fname = data_file('quad-lzw_le.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert not page.is_tiled
        assert page.compression == LZW
        assert page.imagewidth == 512
        assert page.imagelength == 384
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (384, 512, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (384, 512, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[309, 460, :]) == (0, 163, 187)


def test_read_quad_tile():
    """Test read 384x512 RGB uint8 LZW tiled."""
    # Strips and tiles defined in same page
    fname = data_file('quad-tile.tif')
    with TiffFile(fname) as tif:
        assert__str__(tif)
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.is_tiled
        assert page.compression == LZW
        assert page.imagewidth == 512
        assert page.imagelength == 384
        assert page.imagedepth == 1
        assert page.tilewidth == 128
        assert page.tilelength == 128
        assert page.tiledepth == 1
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (384, 512, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        # assert 'invalid tile data (49153,) (1, 128, 128, 3)' in caplog.text
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (384, 512, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[309, 460, :]) == (0, 163, 187)


def test_read_strike():
    """Test read 256x200 RGBA uint8 LZW."""
    fname = data_file('strike.tif')
    with TiffFile(fname) as tif:
        assert__str__(tif)
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.compression == LZW
        assert page.imagewidth == 256
        assert page.imagelength == 200
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        assert page.extrasamples == ASSOCALPHA
        # assert series properties
        series = tif.series[0]
        assert series.shape == (200, 256, 4)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (200, 256, 4)
        assert data.dtype.name == 'uint8'
        assert tuple(data[65, 139, :]) == (43, 34, 17, 91)
        assert__str__(tif)


def test_read_pygame_icon():
    """Test read 128x128 RGBA uint8 PackBits."""
    fname = data_file('pygame_icon.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.compression == PACKBITS
        assert page.imagewidth == 128
        assert page.imagelength == 128
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        assert page.extrasamples == UNASSALPHA  # ?
        assert page.tags['Software'].value == 'QuickTime 5.0.5'
        assert page.tags['HostComputer'].value == 'MacOS 10.1.2'
        assert page.tags['DateTime'].value == '2001:12:21 04:34:56'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (128, 128, 4)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (128, 128, 4)
        assert data.dtype.name == 'uint8'
        assert tuple(data[22, 112, :]) == (100, 99, 98, 132)
        assert__str__(tif)


def test_read_rgba_wo_extra_samples():
    """Test read 1065x785 RGBA uint8."""
    fname = data_file('rgba_wo_extra_samples.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.compression == LZW
        assert page.imagewidth == 1065
        assert page.imagelength == 785
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        # with self.assertRaises(AttributeError):
        #     page.extrasamples
        # assert series properties
        series = tif.series[0]
        assert series.shape == (785, 1065, 4)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (785, 1065, 4)
        assert data.dtype.name == 'uint8'
        assert tuple(data[560, 412, :]) == (60, 92, 74, 255)
        assert__str__(tif)


def test_read_rgb565():
    """Test read 64x64 RGB uint8 5,6,5 bitspersample."""
    fname = data_file('rgb565.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.compression == NONE
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == (5, 6, 5)
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (64, 64, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (64, 64, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[56, 32, :]) == (239, 243, 247)
        assert__str__(tif)


def test_read_vigranumpy():
    """Test read 4 series in 6 pages."""
    fname = data_file('vigranumpy.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 6
        assert len(tif.series) == 4
        # assert series 0 properties
        series = tif.series[0]
        assert series.shape == (3, 20, 20)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'IYX'
        page = series.pages[0]
        assert page.compression == LZW
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        data = tif.asarray(series=0)
        assert data.shape == (3, 20, 20)
        assert data.dtype.name == 'uint8'
        assert tuple(data[:, 9, 9]) == (19, 90, 206)
        # assert series 1 properties
        series = tif.series[1]
        assert series.shape == (10, 10, 3)
        assert series.dtype.name == 'float32'
        assert series.axes == 'YXS'
        page = series.pages[0]
        assert page.photometric == RGB
        assert page.compression == LZW
        assert page.imagewidth == 10
        assert page.imagelength == 10
        assert page.bitspersample == 32
        assert page.samplesperpixel == 3
        data = tif.asarray(series=1)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (10, 10, 3)
        assert data.dtype.name == 'float32'
        assert round(abs(data[9, 9, 1]-214.5733642578125), 7) == 0
        # assert series 2 properties
        series = tif.series[2]
        assert series.shape == (20, 20, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        page = series.pages[0]
        assert page.photometric == RGB
        assert page.compression == LZW
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        data = tif.asarray(series=2)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (20, 20, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[9, 9, :]) == (19, 90, 206)
        # assert series 3 properties
        series = tif.series[3]
        assert series.shape == (10, 10)
        assert series.dtype.name == 'float32'
        assert series.axes == 'YX'
        page = series.pages[0]
        assert page.compression == LZW
        assert page.imagewidth == 10
        assert page.imagelength == 10
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        data = tif.asarray(series=3)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (10, 10)
        assert data.dtype.name == 'float32'
        assert round(abs(data[9, 9]-223.1648712158203), 7) == 0
        assert__str__(tif)


def test_read_freeimage():
    """Test read 3 series in 3 pages RGB LZW."""
    fname = data_file('freeimage.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 3
        assert len(tif.series) == 3
        for i, shape in enumerate(((100, 600), (379, 574), (689, 636))):
            series = tif.series[i]
            shape = shape + (3, )
            assert series.shape == shape
            assert series.dtype.name == 'uint8'
            assert series.axes == 'YXS'
            page = series.pages[0]
            assert page.photometric == RGB
            assert page.compression == LZW
            assert page.imagewidth == shape[1]
            assert page.imagelength == shape[0]
            assert page.bitspersample == 8
            assert page.samplesperpixel == 3
            data = tif.asarray(series=i)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == shape
            assert data.dtype.name == 'uint8'
            assert__str__(tif)


def test_read_12bit():
    """Test read 12 bit images."""
    fname = data_file('12bit.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1000
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_contiguous
        assert page.compression == NONE
        assert page.imagewidth == 1024
        assert page.imagelength == 304
        assert page.bitspersample == 12
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (1000, 304, 1024)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'IYX'
        # assert data
        data = tif.asarray(478)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (304, 1024)
        assert data.dtype.name == 'uint16'
        assert round(abs(data[138, 475]-40), 7) == 0
        assert__str__(tif, 0)


@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
def test_read_lzw_large_buffer():
    """Test read LZW compression which requires large buffer."""
    # https://github.com/groupdocs-viewer/GroupDocs.Viewer-for-.NET-MVC-App
    # /issues/35
    fname = data_file('lzw/lzw_large_buffer.tiff')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.compression == LZW
        assert page.imagewidth == 5104
        assert page.imagelength == 8400
        assert page.bitspersample == 8
        assert page.samplesperpixel == 4
        # assert data
        image = page.asarray()
        assert image.shape == (8400, 5104, 4)
        assert image.dtype == 'uint8'
        image = tif.asarray()
        assert image.shape == (8400, 5104, 4)
        assert image.dtype == 'uint8'
        assert image[4200, 2550, 0] == 0
        assert image[4200, 2550, 3] == 255
        assert__str__(tif)


def test_read_lzw_ycbcr_subsampling():
    """Test fail LZW compression with subsampling."""
    fname = data_file('lzw/lzw_ycbcr_subsampling.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.compression == LZW
        assert page.photometric == YCBCR
        assert page.planarconfig == CONTIG
        assert page.imagewidth == 39
        assert page.imagelength == 39
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        with pytest.raises(NotImplementedError):
            page.asarray()
        assert__str__(tif)


def test_read_jpeg_baboon():
    """Test JPEG compression."""
    # test that jpeg compression is supported
    fname = data_file('baboon.tiff')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert 'JPEGTables' in page.tags
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == JPEG
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        # with pytest.raises((ValueError, NotImplementedError)):
        tif.asarray()
        assert__str__(tif)


def test_read_jpeg_ycbcr():
    """Test read YCBCR JPEG is returned as RGB."""
    fname = data_file('jpeg/jpeg_ycbcr.tiff')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.compression == JPEG
        assert page.photometric == YCBCR
        assert page.planarconfig == CONTIG
        assert page.imagewidth == 128
        assert page.imagelength == 80
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.shape == (80, 128, 3)
        assert image.dtype == 'uint8'
        assert tuple(image[50, 50, :]) == (177, 149, 210)
        # YCBCR (164, 154, 137)
        assert__str__(tif)


def test_read_jpeg12_mandril():
    """Test read JPEG 12-bit compression."""
    # JPEG 12-bit
    fname = data_file('jpeg/jpeg12_mandril.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.compression == JPEG
        assert page.photometric == YCBCR
        assert page.imagewidth == 512
        assert page.imagelength == 480
        assert page.bitspersample == 12
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.shape == (480, 512, 3)
        assert image.dtype == 'uint16'
        assert tuple(image[128, 128, :]) == (1685, 1859, 1376)
        # YCBCR (1752, 1836, 2000)
        assert__str__(tif)


def test_read_aperio_j2k():
    """Test read SVS slide with J2K compression."""
    fname = data_file('slides/CMU-1-JP2K-33005.tif')
    with TiffFile(fname) as tif:
        assert tif.is_svs
        assert len(tif.pages) == 6
        page = tif.pages[0]
        assert page.compression == APERIO_JP2000_RGB
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.shape == (32893, 46000, 3)
        assert page.dtype == 'uint8'
        page = tif.pages[1]
        assert page.compression == JPEG
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.shape == (732, 1024, 3)
        assert page.dtype == 'uint8'
        page = tif.pages[2]
        assert page.compression == APERIO_JP2000_RGB
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.shape == (8223, 11500, 3)
        assert page.dtype == 'uint8'
        page = tif.pages[3]
        assert page.compression == APERIO_JP2000_RGB
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.shape == (2055, 2875, 3)
        assert page.dtype == 'uint8'
        page = tif.pages[4]
        assert page.is_reduced
        assert page.compression == LZW
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.shape == (463, 387, 3)
        assert page.dtype == 'uint8'
        page = tif.pages[5]
        assert page.is_reduced
        assert page.compression == JPEG
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.shape == (431, 1280, 3)
        assert page.dtype == 'uint8'
        # assert data
        image = tif.pages[3].asarray()
        assert image.shape == (2055, 2875, 3)
        assert image.dtype == 'uint8'
        assert image[512, 1024, 0] == 246
        assert image[512, 1024, 1] == 245
        assert image[512, 1024, 2] == 245
        assert__str__(tif)


def test_read_lzma():
    """Test read LZMA compression."""
    # 512x512, uint8, lzma compression
    fname = data_file('lzma.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.compression == LZMA
        assert page.photometric == MINISBLACK
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512)
        assert series.dtype == 'uint8'
        assert series.axes == 'YX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 512)
        assert data.dtype.name == 'uint8'
        assert data[273, 426] == 151
        assert__str__(tif)


def test_read_webp():
    """Test read WebP compression."""
    fname = data_file('GDAL/tif_webp.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.compression == WEBP
        assert page.photometric == RGB
        assert page.planarconfig == CONTIG
        assert page.imagewidth == 50
        assert page.imagelength == 50
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = tif.asarray()
        assert image.shape == (50, 50, 3)
        assert image.dtype == 'uint8'
        assert image[25, 25, 0] == 92
        assert image[25, 25, 1] == 122
        assert image[25, 25, 2] == 37
        assert__str__(tif)


def test_read_zstd():
    """Test read ZStd compression."""
    fname = data_file('GDAL/byte_zstd.tif')
    with TiffFile(fname) as tif:
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.compression == ZSTD
        assert page.photometric == MINISBLACK
        assert page.planarconfig == CONTIG
        assert page.imagewidth == 20
        assert page.imagelength == 20
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert data
        image = tif.asarray()  # fails with imagecodecs <= 2018.11.8
        assert image.shape == (20, 20)
        assert image.dtype == 'uint8'
        assert image[18, 1] == 247
        assert__str__(tif)


# Test special cases produced by photoshop

def test_read_lena_be_f16_contig():
    """Test read big endian float16 horizontal differencing."""
    fname = data_file('PS/lena_be_f16_contig.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype.name == 'float16'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 512, 3)
        assert data.dtype.name == 'float16'
        assert_array_almost_equal(data[256, 256],
                                  (0.4563, 0.052856, 0.064819))
        assert__str__(tif)


def test_read_lena_be_f16_lzw_planar():
    """Test read big endian, float16, LZW, horizontal differencing."""
    fname = data_file('PS/lena_be_f16_lzw_planar.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == LZW
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 512, 512)
        assert series.dtype.name == 'float16'
        assert series.axes == 'SYX'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 512, 512)
        assert data.dtype.name == 'float16'
        assert_array_almost_equal(data[:, 256, 256],
                                  (0.4563, 0.052856, 0.064819))
        assert__str__(tif)


def test_read_lena_be_f32_deflate_contig():
    """Test read big endian, float32 horizontal differencing, deflate."""
    fname = data_file('PS/lena_be_f32_deflate_contig.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == ADOBE_DEFLATE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 32
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype.name == 'float32'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 512, 3)
        assert data.dtype.name == 'float32'
        assert_array_almost_equal(data[256, 256],
                                  (0.456386, 0.052867, 0.064795))
        assert__str__(tif)


def test_read_lena_le_f32_lzw_planar():
    """Test read little endian, LZW, float32 horizontal differencing."""
    fname = data_file('PS/lena_le_f32_lzw_planar.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == LZW
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 32
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 512, 512)
        assert series.dtype.name == 'float32'
        assert series.axes == 'SYX'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 512, 512)
        assert data.dtype.name == 'float32'
        assert_array_almost_equal(data[:, 256, 256],
                                  (0.456386, 0.052867, 0.064795))
        assert__str__(tif)


def test_read_lena_be_rgb48():
    """Test read big endian, uint16, raw."""
    fname = data_file('PS/lena_be_rgb48.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert not page.is_reduced
        assert not page.is_tiled
        assert page.compression == NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 512, 3)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray(series=0)
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 512, 3)
        assert data.dtype.name == 'uint16'
        assert_array_equal(data[256, 256], (46259, 16706, 18504))
        assert__str__(tif)


# Test large images

@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
def test_read_huge_ps5_memmap():
    """Test read 30000x30000 float32 contiguous."""
    fname = data_file('large/huge_ps5.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous == (21890, 3600000000)
        assert not page.is_memmappable  # data not aligned!
        assert page.compression == NONE
        assert page.imagewidth == 30000
        assert page.imagelength == 30000
        assert page.bitspersample == 32
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (30000, 30000)
        assert series.dtype.name == 'float32'
        assert series.axes == 'YX'
        # assert data
        data = tif.asarray(out='memmap')  # memmap in a temp file
        assert isinstance(data, numpy.core.memmap)
        assert data.shape == (30000, 30000)
        assert data.dtype.name == 'float32'
        assert data[6597, 8135] == 0.008780896663665771
        del data
        assert not tif.filehandle.closed
        assert__str__(tif)


@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
def test_read_movie():
    """Test read 30000 pages, uint16."""
    fname = data_file('large/movie.tif')
    with TiffFile(fname, movie=True) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 30000
        assert len(tif.series) == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (30000, 64, 64)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'IYX'
        # assert page properties
        page = tif.pages[-1]
        assert isinstance(page, TiffFrame)
        assert page.shape == (64, 64)
        # assert data
        data = tif.pages[29999].asarray()  # last frame
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (64, 64)
        assert data.dtype.name == 'uint16'
        assert data[32, 32] == 460
        del data
        # read selected pages
        # https://github.com/blink1073/tifffile/issues/51
        data = tif.asarray(key=[31, 999, 29999])
        assert data.shape == (3, 64, 64)
        assert data[2, 32, 32] == 460
        del data
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
@pytest.mark.skipif(IS_32BIT, reason='might segfault due to low memory')
@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
def test_read_100000_pages_movie():
    """Test read 100000x64x64 big endian in memory."""
    fname = data_file('large/100000_pages.tif')
    with TiffFile(fname, movie=True) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 100000
        assert len(tif.series) == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (100000, 64, 64)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'TYX'
        # assert page properties
        page = tif.pages[100]
        assert isinstance(page, TiffFrame)
        assert page.shape == (64, 64)
        page = tif.pages[0]
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert ImageJ tags
        tags = tif.imagej_metadata
        assert tags['ImageJ'] == '1.48g'
        assert round(abs(tags['max']-119.0), 7) == 0
        assert round(abs(tags['min']-86.0), 7) == 0
        # assert data
        data = tif.asarray()
        assert data.shape == (100000, 64, 64)
        assert data.dtype.name == 'uint16'
        assert round(abs(data[7310, 25, 25]-100), 7) == 0
        del data
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_EXTENDED, reason='huge image')
def test_read_movie_memmap():
    """Test read 30000 pages memory-mapped."""
    fname = data_file('large/movie.tif')
    with TiffFile(fname) as tif:
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.core.memmap)
        assert data.shape == (30000, 64, 64)
        assert data.dtype.name == 'uint16'
        assert data[29999, 32, 32] == 460
        del data
        assert not tif.filehandle.closed
        assert__str__(tif, 0)


@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
def test_read_chart_bl():
    """Test read 13228x18710, 1 bit, no bitspersample tag."""
    fname = data_file('large/chart_bl.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.compression == NONE
        assert page.imagewidth == 13228
        assert page.imagelength == 18710
        assert page.bitspersample == 1
        assert page.samplesperpixel == 1
        assert page.rowsperstrip == 18710
        # assert series properties
        series = tif.series[0]
        assert series.shape == (18710, 13228)
        assert series.dtype.name == 'bool'
        assert series.axes == 'YX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (18710, 13228)
        assert data.dtype.name == 'bool'
        assert data[0, 0] is numpy.bool_(True)
        assert data[5000, 5000] is numpy.bool_(False)
        assert__str__(tif)


@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
def test_read_srtm_20_13():
    """Test read 6000x6000 int16 GDAL."""
    fname = data_file('large/srtm_20_13.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.compression == NONE
        assert page.imagewidth == 6000
        assert page.imagelength == 6000
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        assert page.tags['GDAL_NODATA'].value == "-32768"
        assert page.tags['GeoAsciiParamsTag'].value == "WGS 84|"
        # assert series properties
        series = tif.series[0]
        assert series.shape == (6000, 6000)
        assert series.dtype.name == 'int16'
        assert series.axes == 'YX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (6000, 6000)
        assert data.dtype.name == 'int16'
        assert data[5199, 5107] == 1019
        assert data[0, 0] == -32768
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
def test_read_gel_scan():
    """Test read 6976x4992x3 uint8 LZW."""
    fname = data_file('large/gel_1-scan2.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.compression == LZW
        assert page.imagewidth == 4992
        assert page.imagelength == 6976
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (6976, 4992, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YXS'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (6976, 4992, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[2229, 1080, :]) == (164, 164, 164)
        del data
        assert__str__(tif)


@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
def test_read_caspian():
    """Test read 3x220x279 float64, RGB, deflate, GDAL."""
    fname = data_file('caspian.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.planarconfig == SEPARATE
        assert page.compression == DEFLATE
        assert page.imagewidth == 279
        assert page.imagelength == 220
        assert page.bitspersample == 64
        assert page.samplesperpixel == 3
        assert page.tags['GDAL_METADATA'].value.startswith('<GDALMetadata>')
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 220, 279)
        assert series.dtype.name == 'float64'
        assert series.axes == 'SYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 220, 279)
        assert data.dtype.name == 'float64'
        assert round(abs(data[2, 100, 140]-353.0), 7) == 0
        assert__str__(tif)


def test_read_subifds_array():
    """Test read SubIFDs."""
    fname = data_file('Tiff4J/IFD struct/SubIFDs array E.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 1
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 2000
        assert page.imagelength == 1500
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['SubIFDs'].value == (14760220, 18614796,
                                              19800716, 18974964)
        # assert subifds
        assert len(page.pages) == 4
        page = tif.pages[0].pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 1600
        assert page.imagelength == 1200
        page = tif.pages[0].pages[1]
        assert page.photometric == RGB
        assert page.imagewidth == 1200
        assert page.imagelength == 900
        page = tif.pages[0].pages[2]
        assert page.photometric == RGB
        assert page.imagewidth == 800
        assert page.imagelength == 600
        page = tif.pages[0].pages[3]
        assert page.photometric == RGB
        assert page.imagewidth == 400
        assert page.imagelength == 300
        # assert data
        image = page.asarray()
        assert image.shape == (300, 400, 3)
        assert image.dtype == 'uint8'
        assert tuple(image[124, 292]) == (236, 109, 95)
        assert__str__(tif)


def test_read_subifd4():
    """Test read BigTIFFSubIFD4."""
    fname = data_file('TwelveMonkeys/bigtiff/BigTIFFSubIFD4.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 2
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['SubIFDs'].value == (3088,)
        # assert subifd
        page = page.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 32
        assert page.imagelength == 32
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = page.asarray()
        assert image.shape == (32, 32, 3)
        assert image.dtype == 'uint8'
        assert image[15, 15, 0] == 255
        assert image[16, 16, 2] == 0
        assert__str__(tif)


def test_read_subifd8():
    """Test read BigTIFFSubIFD8."""
    fname = data_file('TwelveMonkeys/bigtiff/BigTIFFSubIFD8.tif')
    with TiffFile(fname) as tif:
        assert len(tif.series) == 1
        assert len(tif.pages) == 2
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        assert page.tags['SubIFDs'].value == (3088,)
        # assert subifd
        page = page.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 32
        assert page.imagelength == 32
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert data
        image = page.asarray()
        assert image.shape == (32, 32, 3)
        assert image.dtype == 'uint8'
        assert image[15, 15, 0] == 255
        assert image[16, 16, 2] == 0
        assert__str__(tif)
# @pytest.mark.skipif(SKIP_HUGE, reason='huge image')
@pytest.mark.skipif(IS_32BIT, reason='not enough memory')
def test_read_lsm_mosaic():
"""Test read LSM: PTZCYX (Mosaic mode), two areas, 32 samples, >4 GB."""
# LSM files are little endian with two series, one of which is reduced RGB
# Tags may be unordered or contain bogus values
fname = data_file('lsm/Twoareas_Zstacks54slices_3umintervals_5cycles.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 1080
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_lsm
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 16
assert page.samplesperpixel == 32
# assert strip offsets are corrected
page = tif.pages[-2]
assert page.dataoffsets[0] == 9070895981
# assert series properties
series = tif.series[0]
assert series.shape == (2, 5, 54, 32, 512, 512)
assert series.dtype.name == 'uint16'
assert series.axes == 'PTZCYX'
if 1:
series = tif.series[1]
assert series.shape == (2, 5, 54, 3, 128, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'PTZCYX'
# assert lsm_info tags
tags = tif.lsm_metadata
assert tags['DimensionX'] == 512
assert tags['DimensionY'] == 512
assert tags['DimensionZ'] == 54
assert tags['DimensionTime'] == 5
assert tags['DimensionChannels'] == 32
# assert lsm_scan_info tags
tags = tif.lsm_metadata['ScanInformation']
assert tags['ScanMode'] == 'Stack'
assert tags['User'] == 'lfdguest1'
assert__str__(tif, 0)
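The `dataoffsets[0] == 9070895981` assertion above checks that strip offsets beyond 4 GB are corrected: classic LSM files store 32-bit offsets, which wrap around in files larger than 4 GB. A minimal sketch of such a correction, using a hypothetical `unwrap_offsets` helper and assuming the true offsets increase monotonically:

```python
def unwrap_offsets(raw_offsets):
    """Undo 32-bit wraparound in a monotonically increasing offset list."""
    corrected = []
    base = 0  # multiple of 2**32 to add back
    previous = 0
    for off in raw_offsets:
        if off < previous:  # the stored 32-bit value wrapped around
            base += 2 ** 32
        previous = off
        corrected.append(base + off)
    return corrected


# 9070895981 stored as a 32-bit value wraps to 9070895981 - 2 * 2**32
assert unwrap_offsets([100, 4294967000, 480961389]) == [
    100, 4294967000, 4775928685]
```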
@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
def test_read_lsm_carpet():
"""Test read LSM: ZCTYX (time series x-y), 72000 pages."""
# reads very slowly, ensure colormap is not applied
fname = data_file('lsm/Cardarelli_carpet_3.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 72000
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_lsm
assert 'ColorMap' in page.tags
assert page.photometric == PALETTE
assert page.compression == NONE
assert page.imagewidth == 32
assert page.imagelength == 10
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (1, 1, 36000, 10, 32)
assert series.dtype.name == 'uint8'
assert series.axes == 'ZCTYX'
if 1:
series = tif.series[1]
assert series.shape == (1, 1, 36000, 3, 40, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'ZCTCYX'
# assert lsm_info tags
tags = tif.lsm_metadata
assert tags['DimensionX'] == 32
assert tags['DimensionY'] == 10
assert tags['DimensionZ'] == 1
assert tags['DimensionTime'] == 36000
assert tags['DimensionChannels'] == 1
# assert lsm_scan_info tags
tags = tif.lsm_metadata['ScanInformation']
assert tags['ScanMode'] == 'Plane'
assert tags['User'] == 'LSM User'
assert__str__(tif, 0)
def test_read_lsm_take1():
"""Test read LSM: TCZYX (Plane mode), single image, uint8."""
fname = data_file('lsm/take1.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 2
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_lsm
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 8
assert page.samplesperpixel == 1
page = tif.pages[1]
assert page.is_reduced
assert page.photometric == RGB
assert page.planarconfig == SEPARATE
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 128
assert page.samplesperpixel == 3
assert page.bitspersample == 8
# assert series properties
series = tif.series[0]
assert series.shape == (1, 1, 1, 512, 512) # (512, 512)?
assert series.dtype.name == 'uint8'
assert series.axes == 'TCZYX'
if 1:
series = tif.series[1]
assert series.shape == (3, 128, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'CYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (1, 1, 1, 512, 512) # (512, 512)?
assert data.dtype.name == 'uint8'
assert data[..., 256, 256] == 101
if 1:
data = tif.asarray(series=1)
assert isinstance(data, numpy.ndarray)
assert data.shape == (3, 128, 128)
assert data.dtype.name == 'uint8'
assert tuple(data[..., 64, 64]) == (89, 89, 89)
# assert lsm_info tags
tags = tif.lsm_metadata
assert tags['DimensionX'] == 512
assert tags['DimensionY'] == 512
assert tags['DimensionZ'] == 1
assert tags['DimensionTime'] == 1
assert tags['DimensionChannels'] == 1
# assert lsm_scan_info tags
tags = tif.lsm_metadata['ScanInformation']
assert tags['ScanMode'] == 'Plane'
assert tags['User'] == 'LSM User'
assert len(tags['Tracks']) == 1
assert len(tags['Tracks'][0]['DataChannels']) == 1
track = tags['Tracks'][0]
assert track['DataChannels'][0]['Name'] == 'Ch1'
assert track['DataChannels'][0]['BitsPerSample'] == 8
assert len(track['IlluminationChannels']) == 1
assert track['IlluminationChannels'][0]['Name'] == '561'
assert track['IlluminationChannels'][0]['Wavelength'] == 561.0
assert__str__(tif)
def test_read_lsm_2chzt():
"""Test read LSM: ZCYX (Stack mode) uint8."""
fname = data_file('lsm/2chzt.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 798
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_lsm
assert page.is_contiguous
assert page.photometric == RGB
assert page.databytecounts[2] == 0 # no strip data
assert page.dataoffsets[2] == 242632 # bogus offset
assert page.compression == NONE
assert page.imagewidth == 400
assert page.imagelength == 300
assert page.bitspersample == 8
assert page.samplesperpixel == 2
page = tif.pages[1]
assert page.is_reduced
assert page.photometric == RGB
assert page.planarconfig == SEPARATE
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 96
assert page.samplesperpixel == 3
assert page.bitspersample == 8
# assert series properties
series = tif.series[0]
assert series.shape == (19, 21, 2, 300, 400)
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYX'
if 1:
series = tif.series[1]
assert series.shape == (19, 21, 3, 96, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYX'
# assert data
data = tif.asarray(out='memmap')
assert isinstance(data, numpy.core.memmap)
assert data.shape == (19, 21, 2, 300, 400)
assert data.dtype.name == 'uint8'
assert data[18, 20, 1, 199, 299] == 39
if 1:
data = tif.asarray(series=1)
assert isinstance(data, numpy.ndarray)
assert data.shape == (19, 21, 3, 96, 128)
assert data.dtype.name == 'uint8'
assert tuple(data[18, 20, :, 64, 96]) == (22, 22, 0)
del data
# assert lsm_info tags
tags = tif.lsm_metadata
assert tags['DimensionX'] == 400
assert tags['DimensionY'] == 300
assert tags['DimensionZ'] == 21
assert tags['DimensionTime'] == 19
assert tags['DimensionChannels'] == 2
# assert lsm_scan_info tags
tags = tif.lsm_metadata['ScanInformation']
assert tags['ScanMode'] == 'Stack'
assert tags['User'] == 'zjfhe'
assert len(tags['Tracks']) == 3
assert len(tags['Tracks'][0]['DataChannels']) == 1
track = tags['Tracks'][0]
assert track['DataChannels'][0]['Name'] == 'Ch3'
assert track['DataChannels'][0]['BitsPerSample'] == 8
assert len(track['IlluminationChannels']) == 6
assert track['IlluminationChannels'][5]['Name'] == '488'
assert track['IlluminationChannels'][5]['Wavelength'] == 488.0
assert__str__(tif, 0)
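`tif.asarray(out='memmap')` above returns a `numpy.memmap` backed by a file rather than loading the whole series into RAM. The same mechanism with plain numpy, independent of tifffile (smaller shape chosen for brevity):

```python
import tempfile
import numpy

# back an array with a file on disk instead of RAM
with tempfile.NamedTemporaryFile(suffix='.dat', delete=False) as fh:
    fname = fh.name
data = numpy.memmap(fname, dtype='uint8', mode='w+', shape=(2, 300, 400))
data[1, 199, 299] = 39  # written through to the backing file
data.flush()
# reopen read-only; the value was persisted
readback = numpy.memmap(fname, dtype='uint8', mode='r', shape=(2, 300, 400))
assert readback[1, 199, 299] == 39
```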
def test_read_lsm_earpax2isl11():
"""Test read LSM: TZCYX (1, 19, 3, 512, 512) uint8, RGB, LZW."""
fname = data_file('lsm/earpax2isl11.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 38
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_lsm
assert not page.is_contiguous
assert page.photometric == RGB
assert page.compression == LZW
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 8
assert page.samplesperpixel == 3
# assert corrected strip_byte_counts
assert page.tags['StripByteCounts'].value == (262144, 262144, 262144)
assert page.databytecounts == (131514, 192933, 167874)
page = tif.pages[1]
assert page.is_reduced
assert page.photometric == RGB
assert page.planarconfig == SEPARATE
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 128
assert page.samplesperpixel == 3
assert page.bitspersample == 8
# assert series properties
series = tif.series[0]
assert series.shape == (1, 19, 3, 512, 512)
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYX'
if 1:
series = tif.series[1]
assert series.shape == (1, 19, 3, 128, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (1, 19, 3, 512, 512)
assert data.dtype.name == 'uint8'
assert tuple(data[0, 18, :, 200, 320]) == (17, 22, 21)
if 1:
data = tif.asarray(series=1)
assert isinstance(data, numpy.ndarray)
assert data.shape == (1, 19, 3, 128, 128)
assert data.dtype.name == 'uint8'
assert tuple(data[0, 18, :, 64, 64]) == (25, 5, 33)
# assert lsm_info tags
tags = tif.lsm_metadata
assert tags['DimensionX'] == 512
assert tags['DimensionY'] == 512
assert tags['DimensionZ'] == 19
assert tags['DimensionTime'] == 1
assert tags['DimensionChannels'] == 3
# assert lsm_scan_info tags
tags = tif.lsm_metadata['ScanInformation']
assert tags['ScanMode'] == 'Stack'
assert tags['User'] == 'megason'
assert__str__(tif)
@pytest.mark.skipif(SKIP_HUGE or IS_32BIT, reason='huge image')
def test_read_lsm_mb231paxgfp_060214():
"""Test read LSM with many LZW compressed pages."""
# TZCYX (Stack mode), (60, 31, 2, 512, 512), 3720
fname = data_file('lsm/MB231paxgfp_060214.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 3720
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert page.is_lsm
assert not page.is_contiguous
assert page.compression == LZW
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 16
assert page.samplesperpixel == 2
page = tif.pages[1]
assert page.is_reduced
assert page.photometric == RGB
assert page.planarconfig == SEPARATE
assert page.compression == NONE
assert page.imagewidth == 128
assert page.imagelength == 128
assert page.samplesperpixel == 3
assert page.bitspersample == 8
# assert series properties
series = tif.series[0]
assert series.shape == (60, 31, 2, 512, 512)
assert series.dtype.name == 'uint16'
assert series.axes == 'TZCYX'
if 1:
series = tif.series[1]
assert series.shape == (60, 31, 3, 128, 128)
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYX'
# assert data
data = tif.asarray(out='memmap', maxworkers=None)
assert isinstance(data, numpy.core.memmap)
assert data.shape == (60, 31, 2, 512, 512)
assert data.dtype.name == 'uint16'
assert data[59, 30, 1, 256, 256] == 222
del data
# assert lsm_info tags
tags = tif.lsm_metadata
assert tags['DimensionX'] == 512
assert tags['DimensionY'] == 512
assert tags['DimensionZ'] == 31
assert tags['DimensionTime'] == 60
assert tags['DimensionChannels'] == 2
# assert some lsm_scan_info tags
tags = tif.lsm_metadata['ScanInformation']
assert tags['ScanMode'] == 'Stack'
assert tags['User'] == 'lfdguest1'
assert__str__(tif, 0)
def test_read_lsm_lzw_no_eoi():
"""Test read LSM with LZW compressed strip without EOI."""
# The first LZW compressed strip in page 834 has no EOI code,
# causing the decoder to return too much data and corrupt
# the data of the second channel
fname = data_file('lsm/MB231paxgfp_060214.lsm')
with TiffFile(fname) as tif:
assert tif.is_lsm
assert tif.byteorder == '<'
assert len(tif.pages) == 3720
assert len(tif.series) == 2
# assert page properties
page = tif.pages[0]
assert not page.is_contiguous
assert page.photometric == RGB
assert page.compression == LZW
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 16
assert page.samplesperpixel == 2
page = tif.pages[834]
assert isinstance(page, TiffFrame)
assert page.databytecounts == (454886, 326318)
assert page.dataoffsets == (344655101, 345109987)
# assert second channel is not corrupted
data = page.asarray()
assert tuple(data[:, 0, 0]) == (288, 238)
assert__str__(tif, 0)
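The defensive fix for a strip without an EOI code is simply to clip the decoded output to the byte count implied by the strip dimensions. A sketch with a hypothetical `clip_strip` helper (parameter names are illustrative, not tifffile's API):

```python
def clip_strip(decoded, rowsperstrip, width, bytes_per_sample):
    """Truncate an over-long decoded strip to its expected size."""
    expected = rowsperstrip * width * bytes_per_sample
    return decoded[:expected]


# a decoder that over-ran the strip boundary by two bytes
assert clip_strip(b'\x00' * 10, 1, 4, 2) == b'\x00' * 8
```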
def test_read_stk_zseries():
"""Test read MetaMorph STK z-series."""
fname = data_file('stk/zseries.stk')
with TiffFile(fname) as tif:
assert tif.is_stk
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 320
assert page.imagelength == 256
assert page.bitspersample == 16
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'MetaMorph'
assert page.tags['DateTime'].value == '2000:01:02 15:06:33'
assert page.description.startswith('Acquired from MV-1500')
# assert uic tags
tags = tif.stk_metadata
assert tags['Name'] == 'Z Series'
assert tags['NumberPlanes'] == 11
assert ''.join(tags['StageLabel']) == ''
assert tags['ZDistance'][10] == 2.5
assert len(tags['Wavelengths']) == 11
assert tags['Wavelengths'][10] == 490.0
assert len(tags['AbsoluteZ']) == 11
assert tags['AbsoluteZ'][10] == 150.0
assert tuple(tags['StagePosition'][10]) == (0.0, 0.0)
assert tuple(tags['CameraChipOffset'][10]) == (0.0, 0.0)
assert tags['PlaneDescriptions'][0].startswith('Acquired from MV-1500')
assert str(tags['DatetimeCreated'][0]) == (
'2000-02-02T15:06:02.000783000')
# assert series properties
series = tif.series[0]
assert series.shape == (11, 256, 320)
assert series.dtype.name == 'uint16'
assert series.axes == 'ZYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (11, 256, 320)
assert data.dtype.name == 'uint16'
assert data[8, 159, 255] == 1156
assert__str__(tif)
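The test above shows why `len(tif.pages) == 1` while `series.shape == (11, 256, 320)`: STK files have a single IFD but store all planes contiguously after it, and the series shape prepends `NumberPlanes` to the page shape. A numpy sketch of that reshaping:

```python
import numpy

# eleven contiguous 256x320 uint16 planes behind one IFD
numberplanes, pageshape = 11, (256, 320)
raw = numpy.zeros(numberplanes * 256 * 320, dtype='uint16')
raw[-1] = 7  # mark the last stored sample
series = raw.reshape((numberplanes,) + pageshape)  # axes 'ZYX'
assert series.shape == (11, 256, 320)
assert series[10, 255, 319] == 7
```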
def test_read_stk_zser24():
"""Test read MetaMorph STK RGB z-series."""
fname = data_file('stk/zser24.stk')
with TiffFile(fname) as tif:
assert tif.is_stk
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == RGB
assert page.compression == NONE
assert page.imagewidth == 160
assert page.imagelength == 128
assert page.bitspersample == 8
assert page.samplesperpixel == 3
assert page.tags['Software'].value == 'MetaMorph'
assert page.tags['DateTime'].value == '2000:01:02 15:11:23'
# assert uic tags
tags = tif.stk_metadata
assert tags['Name'] == 'Color Encoded'
assert tags['NumberPlanes'] == 11
assert ''.join(tags['StageLabel']) == ''
assert tags['ZDistance'][10] == 2.5
assert len(tags['Wavelengths']) == 11
assert tags['Wavelengths'][10] == 510.0
assert len(tags['AbsoluteZ']) == 11
assert tags['AbsoluteZ'][10] == 150.0
assert tuple(tags['StagePosition'][10]) == (0.0, 0.0)
assert tuple(tags['CameraChipOffset'][10]) == (320., 256.)
assert str(tags['DatetimeCreated'][0]) == (
'2000-02-02T15:10:34.000264000')
# assert series properties
series = tif.series[0]
assert series.shape == (11, 128, 160, 3)
assert series.dtype.name == 'uint8'
assert series.axes == 'ZYXS'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (11, 128, 160, 3)
assert data.dtype.name == 'uint8'
assert tuple(data[8, 100, 135]) == (70, 63, 0)
assert__str__(tif)
def test_read_stk_diatoms3d():
"""Test read MetaMorph STK time-series."""
fname = data_file('stk/diatoms3d.stk')
with TiffFile(fname) as tif:
assert tif.is_stk
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 196
assert page.imagelength == 191
assert page.bitspersample == 8
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'MetaMorph'
assert page.tags['DateTime'].value == '2000:01:04 14:57:22'
# assert uic tags
tags = tif.stk_metadata
assert tags['Name'] == 'diatoms3d'
assert tags['NumberPlanes'] == 10
assert ''.join(tags['StageLabel']) == ''
assert tags['ZDistance'][9] == 3.54545
assert len(tags['Wavelengths']) == 10
assert tags['Wavelengths'][9] == 440.0
assert len(tags['AbsoluteZ']) == 10
assert tags['AbsoluteZ'][9] == 12898.15
assert tuple(tags['StagePosition'][9]) == (0.0, 0.0)
assert tuple(tags['CameraChipOffset'][9]) == (206., 148.)
assert tags['PlaneDescriptions'][0].startswith(
'Acquired from Flashbus.')
assert str(tags['DatetimeCreated'][0]) == (
'2000-02-04T14:38:37.000738000')
# assert series properties
series = tif.series[0]
assert series.shape == (10, 191, 196)
assert series.dtype.name == 'uint8'
assert series.axes == 'ZYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (10, 191, 196)
assert data.dtype.name == 'uint8'
assert data[8, 100, 135] == 223
assert__str__(tif)
def test_read_stk_greenbeads():
"""Test read MetaMorph STK time-series, but time_created is corrupt (?)."""
# 8bit palette is present but should not be applied
fname = data_file('stk/greenbeads.stk')
with TiffFile(fname) as tif:
assert tif.is_stk
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == PALETTE
assert page.compression == NONE
assert page.imagewidth == 298
assert page.imagelength == 322
assert page.bitspersample == 8
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'MetaMorph 7.5.3.0'
assert page.tags['DateTime'].value == '2008:05:09 17:35:32'
# assert uic tags
tags = tif.stk_metadata
assert tags['Name'] == 'Green'
assert tags['NumberPlanes'] == 79
assert tags['ZDistance'][1] == 0.0
assert len(tags['Wavelengths']) == 79
assert tuple(tags['CameraChipOffset'][0]) == (0.0, 0.0)
assert str(tags['DatetimeModified'][0]) == (
'2008-05-09T17:35:33.000274000')
assert 'AbsoluteZ' not in tags
# assert series properties
series = tif.series[0]
assert series.shape == (79, 322, 298)
assert series.dtype.name == 'uint8'
assert series.axes == 'IYX' # corrupt time_created
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (79, 322, 298)
assert data.dtype.name == 'uint8'
assert data[43, 180, 102] == 205
assert__str__(tif)
def test_read_stk_10xcalib():
"""Test read MetaMorph STK two planes, not Z or T series."""
fname = data_file('stk/10xcalib.stk')
with TiffFile(fname) as tif:
assert tif.is_stk
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric != PALETTE
assert page.compression == NONE
assert page.imagewidth == 640
assert page.imagelength == 480
assert page.bitspersample == 8
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'MetaMorph'
assert page.tags['DateTime'].value == '2000:03:28 09:24:37'
# assert uic tags
tags = tif.stk_metadata
assert tags['Name'] == '10xcalib'
assert tags['NumberPlanes'] == 2
assert tuple(tags['Wavelengths']) == (440.0, 440.0)
assert tags['XCalibration'] == 1.24975007
assert tags['YCalibration'] == 1.24975007
# assert series properties
series = tif.series[0]
assert series.shape == (2, 480, 640)
assert series.dtype.name == 'uint8'
assert series.axes == 'IYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (2, 480, 640)
assert data.dtype.name == 'uint8'
assert data[1, 339, 579] == 56
assert__str__(tif)
@pytest.mark.skipif(IS_32BIT, reason='might segfault due to low memory')
@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
def test_read_stk_112508h100():
"""Test read MetaMorph STK large time-series."""
fname = data_file('stk/112508h100.stk')
with TiffFile(fname) as tif:
assert tif.is_stk
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric != PALETTE
assert page.compression == NONE
assert page.imagewidth == 512
assert page.imagelength == 128
assert page.bitspersample == 16
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'MetaMorph 7.5.3.0'
assert page.tags['DateTime'].value == '2008:11:25 18:59:20'
# assert uic tags
tags = tif.stk_metadata
assert tags['Name'] == 'Photometrics'
assert tags['NumberPlanes'] == 2048
assert len(tags['PlaneDescriptions']) == 2048
assert tags['PlaneDescriptions'][0].startswith(
'Acquired from Photometrics\r\n')
assert tags['CalibrationUnits'] == 'pixel'
# assert series properties
series = tif.series[0]
assert series.shape == (2048, 128, 512)
assert series.dtype.name == 'uint16'
assert series.axes == 'TYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (2048, 128, 512)
assert data.dtype.name == 'uint16'
assert data[2047, 64, 128] == 7132
assert__str__(tif)
def test_read_ndpi_cmu_1_ndpi():
"""Test read Hamamatsu NDPI slide, JPEG."""
fname = data_file('HamamatsuNDPI/CMU-1.ndpi')
with TiffFile(fname) as tif:
assert tif.is_ndpi
assert len(tif.pages) == 5
assert len(tif.series) == 5
for page in tif.pages:
assert page.ndpi_tags['Model'] == 'NanoZoomer'
# first page
page = tif.pages[0]
assert page.is_ndpi
assert page.photometric == YCBCR
assert page.compression == JPEG
assert page.shape == (38144, 51200, 3)
assert page.ndpi_tags['Magnification'] == 20.0
# page 4
page = tif.pages[4]
assert page.is_ndpi
assert page.photometric == YCBCR
assert page.compression == JPEG
assert page.shape == (408, 1191, 3)
assert page.ndpi_tags['Magnification'] == -1.0
assert page.asarray()[226, 629, 0] == 167
assert__str__(tif)
@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
def test_read_ndpi_cmu_2():
"""Test read Hamamatsu NDPI slide, JPEG."""
# JPEG stream too large to be opened with libjpeg
fname = data_file('HamamatsuNDPI/CMU-2.ndpi')
with TiffFile(fname) as tif:
assert tif.is_ndpi
assert len(tif.pages) == 6
assert len(tif.series) == 6
for page in tif.pages:
assert page.ndpi_tags['Model'] == 'NanoZoomer'
# first page
page = tif.pages[0]
assert page.is_ndpi
assert page.photometric == YCBCR
assert page.compression == JPEG
assert page.shape == (33792, 79872, 3)
assert page.ndpi_tags['Magnification'] == 20.0
with pytest.raises(RuntimeError):
page.asarray()
# page 5
page = tif.pages[-1]
assert page.is_ndpi
assert page.photometric == YCBCR
assert page.compression == JPEG
assert page.shape == (408, 1191, 3)
assert page.ndpi_tags['Magnification'] == -1.0
assert page.asarray()[226, 629, 0] == 181
assert__str__(tif)
def test_read_svs_cmu_1():
"""Test read Aperio SVS slide, JPEG and LZW."""
fname = data_file('AperioSVS/CMU-1.svs')
with TiffFile(fname) as tif:
assert tif.is_svs
assert not tif.is_scanimage
assert len(tif.pages) == 6
assert len(tif.series) == 6
for page in tif.pages:
svs_description_metadata(page.description)
# first page
page = tif.pages[0]
assert page.is_svs
assert page.is_chroma_subsampled
assert page.photometric == RGB
assert page.is_tiled
assert page.compression == JPEG
assert page.shape == (32914, 46000, 3)
metadata = svs_description_metadata(page.description)
assert metadata['Aperio Image Library'] == 'v10.0.51'
assert metadata['Originalheight'] == 33014
# page 4
page = tif.pages[4]
assert page.is_svs
assert page.is_reduced
assert page.photometric == RGB
assert page.compression == LZW
assert page.shape == (463, 387, 3)
metadata = svs_description_metadata(page.description)
assert metadata[''] == 'label 387x463'
assert__str__(tif)
def test_read_svs_jp2k_33003_1():
"""Test read Aperio SVS slide, JP2000 and LZW."""
fname = data_file('AperioSVS/JP2K-33003-1.svs')
with TiffFile(fname) as tif:
assert tif.is_svs
assert not tif.is_scanimage
assert len(tif.pages) == 6
assert len(tif.series) == 6
for page in tif.pages:
svs_description_metadata(page.description)
# first page
page = tif.pages[0]
assert page.is_svs
assert not page.is_chroma_subsampled
assert page.photometric == RGB
assert page.is_tiled
assert page.compression.name == 'APERIO_JP2000_YCBC'
assert page.shape == (17497, 15374, 3)
metadata = svs_description_metadata(page.description)
assert metadata['Aperio Image Library'] == 'v10.0.50'
assert metadata['Originalheight'] == 17597
# page 4
page = tif.pages[4]
assert page.is_svs
assert page.is_reduced
assert page.photometric == RGB
assert page.compression == LZW
assert page.shape == (422, 415, 3)
metadata = svs_description_metadata(page.description)
assert metadata[''] == 'label 415x422'
assert__str__(tif)
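The SVS tests call `svs_description_metadata` on pipe-separated `key = value` fields in the ImageDescription tag. A simplified sketch of such a parser; this hypothetical helper skips the free-text header line, which the real function maps to the 'Aperio Image Library' key:

```python
def parse_svs_description(description):
    """Parse pipe-separated 'key = value' fields from an SVS description."""
    metadata = {}
    for field in description.split('|'):
        if '=' in field:
            key, _, value = field.partition('=')
            metadata[key.strip()] = value.strip()
    return metadata


sample = ('Aperio Image Library v10.0.51\r\n'
          '46000x32914 -> 1024x732|AppMag = 20|MPP = 0.499')
meta = parse_svs_description(sample)
assert meta['AppMag'] == '20'
```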
def test_read_scanimage_metadata():
"""Test read ScanImage metadata."""
fname = data_file('ScanImage/TS_UnitTestImage_BigTIFF.tif')
with open(fname, 'rb') as fh:
frame_data, roi_data = read_scanimage_metadata(fh)
assert frame_data['SI.hChannels.channelType'] == ['stripe', 'stripe']
assert roi_data['RoiGroups']['imagingRoiGroup']['ver'] == 1
def test_read_scanimage_no_framedata():
"""Test read ScanImage no FrameData."""
fname = data_file('ScanImage/PSF001_ScanImage36.tif')
with TiffFile(fname) as tif:
assert tif.is_scanimage
assert len(tif.pages) == 100
assert len(tif.series) == 1
# the non-TIFF ScanImage metadata contains no FrameData
assert 'FrameData' not in tif.scanimage_metadata
# assert page properties
page = tif.pages[0]
assert page.is_scanimage
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 256
assert page.imagelength == 256
assert page.bitspersample == 16
assert page.samplesperpixel == 1
# description tags
metadata = scanimage_description_metadata(page.description)
assert metadata['state.software.version'] == 3.6
assert__str__(tif)
def test_read_scanimage_bigtiff():
"""Test read ScanImage BigTIFF."""
fname = data_file('ScanImage/TS_UnitTestImage_BigTIFF.tif')
with TiffFile(fname) as tif:
assert tif.is_scanimage
assert len(tif.pages) == 3
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_scanimage
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 16
assert page.samplesperpixel == 1
# metadata in description, software, artist tags
metadata = scanimage_description_metadata(page.description)
assert metadata['frameNumbers'] == 1
metadata = scanimage_description_metadata(
page.tags['Software'].value)
assert metadata['SI.TIFF_FORMAT_VERSION'] == 3
metadata = scanimage_artist_metadata(page.tags['Artist'].value)
assert metadata['RoiGroups']['imagingRoiGroup']['ver'] == 1
metadata = tif.scanimage_metadata
assert metadata['FrameData']['SI.TIFF_FORMAT_VERSION'] == 3
assert metadata['RoiGroups']['imagingRoiGroup']['ver'] == 1
assert metadata['Description']['frameNumbers'] == 1
assert__str__(tif)
def test_read_ome_single_channel():
"""Test read OME image."""
# 2D (single image)
# OME-TIFF reference images from
# https://www.openmicroscopy.org/site/support/ome-model/ome-tiff
fname = data_file('OME/single-channel.ome.tif')
with TiffFile(fname) as tif:
assert tif.is_ome
assert tif.byteorder == '>'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
assert page.compression == NONE
assert page.imagewidth == 439
assert page.imagelength == 167
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (167, 439)
assert series.dtype.name == 'int8'
assert series.axes == 'YX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (167, 439)
assert data.dtype.name == 'int8'
assert data[158, 428] == 91
assert__str__(tif)
def test_read_ome_multi_channel():
"""Test read OME multi channel image."""
# 2D (3 channels)
fname = data_file('OME/multi-channel.ome.tif')
with TiffFile(fname) as tif:
assert tif.is_ome
assert tif.byteorder == '>'
assert len(tif.pages) == 3
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
assert page.compression == NONE
assert page.imagewidth == 439
assert page.imagelength == 167
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (3, 167, 439)
assert series.dtype.name == 'int8'
assert series.axes == 'CYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (3, 167, 439)
assert data.dtype.name == 'int8'
assert data[2, 158, 428] == 91
assert__str__(tif)
def test_read_ome_z_series():
"""Test read OME volume."""
# 3D (5 focal planes)
fname = data_file('OME/z-series.ome.tif')
with TiffFile(fname) as tif:
assert tif.is_ome
assert tif.byteorder == '>'
assert len(tif.pages) == 5
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
assert page.compression == NONE
assert page.imagewidth == 439
assert page.imagelength == 167
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (5, 167, 439)
assert series.dtype.name == 'int8'
assert series.axes == 'ZYX'
# assert data
data = tif.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (5, 167, 439)
assert data.dtype.name == 'int8'
assert data[4, 158, 428] == 91
assert__str__(tif)


def test_read_ome_multi_channel_z_series():
    """Test read OME multi-channel volume."""
    # 3D (5 focal planes, 3 channels)
    fname = data_file('OME/multi-channel-z-series.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 15
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 5, 167, 439)
        assert series.dtype.name == 'int8'
        assert series.axes == 'CZYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 5, 167, 439)
        assert data.dtype.name == 'int8'
        assert data[2, 4, 158, 428] == 91
        assert__str__(tif)


def test_read_ome_time_series():
    """Test read OME time-series of images."""
    # 3D (7 time points)
    fname = data_file('OME/time-series.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 7
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 167, 439)
        assert series.dtype.name == 'int8'
        assert series.axes == 'TYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 167, 439)
        assert data.dtype.name == 'int8'
        assert data[6, 158, 428] == 91
        assert__str__(tif)


def test_read_ome_multi_channel_time_series():
    """Test read OME time-series of multi-channel images."""
    # 3D (7 time points, 3 channels)
    fname = data_file('OME/multi-channel-time-series.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 21
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 3, 167, 439)
        assert series.dtype.name == 'int8'
        assert series.axes == 'TCYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 3, 167, 439)
        assert data.dtype.name == 'int8'
        assert data[6, 2, 158, 428] == 91
        assert__str__(tif)


def test_read_ome_4d_series():
    """Test read OME time-series of volumes."""
    # 4D (7 time points, 5 focal planes)
    fname = data_file('OME/4D-series.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 35
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 5, 167, 439)
        assert series.dtype.name == 'int8'
        assert series.axes == 'TZYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 5, 167, 439)
        assert data.dtype.name == 'int8'
        assert data[6, 4, 158, 428] == 91
        assert__str__(tif)


def test_read_ome_multi_channel_4d_series():
    """Test read OME time-series of multi-channel volumes."""
    # 4D (7 time points, 5 focal planes, 3 channels)
    fname = data_file('OME/multi-channel-4D-series.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 105
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 439
        assert page.imagelength == 167
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (7, 3, 5, 167, 439)
        assert series.dtype.name == 'int8'
        assert series.axes == 'TCZYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (7, 3, 5, 167, 439)
        assert data.dtype.name == 'int8'
        assert data[6, 0, 4, 158, 428] == 91
        assert__str__(tif)


def test_read_ome_modulo_flim():
    """Test read OME modulo FLIM."""
    # Two channels each recorded at two timepoints and eight histogram bins
    fname = data_file('OME/FLIM-modulo-sample.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 32
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 180
        assert page.imagelength == 200
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 8, 2, 200, 180)
        assert series.dtype.name == 'int8'
        assert series.axes == 'THCYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 8, 2, 200, 180)
        assert data.dtype.name == 'int8'
        assert data[1, 7, 1, 190, 161] == 92
        assert__str__(tif)


def test_read_ome_modulo_spim():
    """Test read OME modulo SPIM."""
    # 2x2 tile of planes each recorded at 4 angles
    fname = data_file('OME/SPIM-modulo-sample.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 192
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value == 'LOCI Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 160
        assert page.imagelength == 220
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 4, 2, 4, 2, 220, 160)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'TRZACYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 4, 2, 4, 2, 220, 160)
        assert data.dtype.name == 'uint8'
        assert data[2, 3, 1, 3, 1, 210, 151] == 92
        assert__str__(tif)


def test_read_ome_modulo_lambda():
    """Test read OME modulo LAMBDA."""
    # Excitation at 5 wavelengths [big-lambda], each recorded at 10 emission
    # wavelength ranges [lambda].
    fname = data_file('OME/LAMBDA-modulo-sample.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 50
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value == 'LOCI Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 200
        assert page.imagelength == 200
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (10, 5, 200, 200)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'EPYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (10, 5, 200, 200)
        assert data.dtype.name == 'uint8'
        assert data[9, 4, 190, 192] == 92
        assert__str__(tif)


def test_read_ome_multi_image_pixels():
    """Test read OME with three image series."""
    fname = data_file('OME/multi-image-pixels.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 86
        assert len(tif.series) == 3
        # assert page properties
        for (i, axes, shape) in ((0, 'CTYX', (2, 7, 555, 431)),
                                 (1, 'TZYX', (6, 2, 461, 348)),
                                 (2, 'TZCYX', (4, 5, 3, 239, 517))):
            series = tif.series[i]
            page = series.pages[0]
            assert page.is_contiguous
            assert page.tags['Software'].value == 'LOCI Bio-Formats'
            assert page.compression == NONE
            assert page.imagewidth == shape[-1]
            assert page.imagelength == shape[-2]
            assert page.bitspersample == 8
            assert page.samplesperpixel == 1
            # assert series properties
            assert series.shape == shape
            assert series.dtype.name == 'uint8'
            assert series.axes == axes
            # assert data
            data = tif.asarray(series=i)
            assert isinstance(data, numpy.ndarray)
            assert data.shape == shape
            assert data.dtype.name == 'uint8'
        assert__str__(tif)


def test_read_ome_zen_2chzt():
    """Test read OME time-series of two-channel volumes by ZEN 2011."""
    fname = data_file('OME/zen_2chzt.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 798
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value == 'ZEN 2011 (blue edition)'
        assert page.compression == NONE
        assert page.imagewidth == 400
        assert page.imagelength == 300
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 19, 21, 300, 400)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'CTZYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 19, 21, 300, 400)
        assert data.dtype.name == 'uint8'
        assert data[1, 10, 10, 100, 245] == 78
        assert__str__(tif, 0)


@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
def test_read_ome_multifile():
    """Test read OME CTZYX series in 86 files."""
    # (2, 43, 10, 512, 512) CTZYX uint8 in 86 files, 10 pages each
    fname = data_file('OME/tubhiswt-4D/tubhiswt_C0_TP10.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 10
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 43, 10, 512, 512)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'CTZYX'
        # assert other files are closed
        for page in tif.series[0].pages:
            assert bool(page.parent.filehandle._fh) == (page.parent == tif)
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.core.memmap)
        assert data.shape == (2, 43, 10, 512, 512)
        assert data.dtype.name == 'uint8'
        assert data[1, 42, 9, 426, 272] == 123
        del data
        # assert other files are still closed
        for page in tif.series[0].pages:
            assert bool(page.parent.filehandle._fh) == (page.parent == tif)
        assert__str__(tif)
    # assert all files stay open
    # with TiffFile(fname) as tif:
    #     for page in tif.series[0].pages:
    #         self.assertTrue(page.parent.filehandle._fh)
    #     data = tif.asarray(out='memmap')
    #     for page in tif.series[0].pages:
    #         self.assertTrue(page.parent.filehandle._fh)


@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
def test_read_ome_multifile_missing(caplog):
    """Test read OME referencing missing files."""
    # (2, 43, 10, 512, 512) CTZYX uint8, 85 files missing
    fname = data_file('OME/tubhiswt_C1_TP42.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 10
        assert len(tif.series) == 1
        assert 'failed to read' in caplog.text
        # assert page properties
        page = tif.pages[0]
        TiffPage.__str__(page, 4)
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.samplesperpixel == 1
        page = tif.pages[-1]
        TiffPage.__str__(page, 4)
        assert page.shape == (512, 512)
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 43, 10, 512, 512)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'CTZYX'
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.core.memmap)
        assert data.shape == (2, 43, 10, 512, 512)
        assert data.dtype.name == 'uint8'
        assert data[1, 42, 9, 426, 272] == 123
        del data
        assert__str__(tif)


def test_read_ome_rgb():
    """Test read OME RGB image."""
    # https://github.com/openmicroscopy/bioformats/pull/1986
    fname = data_file('OME/test_rgb.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 1280
        assert page.imagelength == 720
        assert page.bitspersample == 8
        assert page.samplesperpixel == 3
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 720, 1280)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'CYX'
        assert series.offset == 17524
        # assert data
        data = tif.asarray()
        assert data.shape == (3, 720, 1280)
        assert data.dtype.name == 'uint8'
        assert data[1, 158, 428] == 253
        assert__str__(tif)


def test_read_ome_float_modulo_attributes():
    """Test read OME with floating point modulo attributes."""
    # reported by Stuart Berg. File by Lorenz Maier.
    fname = data_file('OME/float_modulo_attributes.ome.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '>'
        assert len(tif.pages) == 2
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.compression == NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 512, 512)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'QYX'
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (2, 512, 512)
        assert data.dtype.name == 'uint16'
        assert data[1, 158, 428] == 51
        assert__str__(tif)


def test_read_ome_cropped(caplog):
    """Test read bad OME by ImageJ cropping."""
    # ImageJ produces invalid ome-xml when cropping
    # http://lists.openmicroscopy.org.uk/pipermail/ome-devel/2013-December
    # /002631.html
    # Reported by Hadrien Mary on Dec 11, 2013
    fname = data_file('ome/cropped.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        assert 'invalid TiffData index' in caplog.text
        # assert page properties
        page = tif.pages[0]
        assert page.tags['Software'].value[:15] == 'OME Bio-Formats'
        assert page.imagewidth == 324
        assert page.imagelength == 249
        assert page.bitspersample == 16
        # assert series properties
        series = tif.series[0]
        assert series.shape == (5, 10, 2, 249, 324)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'TZCYX'
        # assert data
        data = tif.asarray()
        assert data.shape == (5, 10, 2, 249, 324)
        assert data.dtype.name == 'uint16'
        assert data[4, 9, 1, 175, 123] == 9605
        del data
        assert__str__(tif)


def test_read_ome_nikon(caplog):
    """Test read bad OME by Nikon."""
    # OME-XML references only first image
    # received from E. Gratton
    fname = data_file('OME/Nikon-cell011.ome.tif')
    with TiffFile(fname) as tif:
        assert tif.is_ome
        assert tif.byteorder == '<'
        assert len(tif.pages) == 1000
        assert len(tif.series) == 1
        assert 'index out of range' in caplog.text
        # assert page properties
        page = tif.pages[0]
        assert page.photometric != RGB
        assert page.imagewidth == 1982
        assert page.imagelength == 1726
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 1
        assert len(series.pages) == 1
        assert series.offset == 16  # contiguous
        assert series.shape == (1726, 1982)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'YX'
        assert__str__(tif)
    with TiffFile(fname, is_ome=False) as tif:
        assert not tif.is_ome
        # assert series properties
        series = tif.series[0]
        assert len(series.pages) == 1000
        assert series.offset is None  # not contiguous
        assert series.shape == (1000, 1726, 1982)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'IYX'
        assert__str__(tif)


# Test Andor

def test_read_andor_light_sheet_512p():
    """Test read Andor."""
    # 100 pages, 512x512, uint16
    fname = data_file('andor/light sheet 512px.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        assert tif.is_andor
        # assert page properties
        page = tif.pages[0]
        assert page.is_andor
        assert page.is_contiguous
        assert page.compression == NONE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        t = page.andor_tags
        assert t['SoftwareVersion'] == '4.23.30014.0'
        assert t['Frames'] == 100.0
        # assert series properties
        series = tif.series[0]
        assert series.shape == (100, 512, 512)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'IYX'
        # assert data
        data = tif.asarray()
        assert data.shape == (100, 512, 512)
        assert data.dtype.name == 'uint16'
        assert round(abs(data[50, 256, 256]-703), 7) == 0
        assert__str__(tif, 0)


# Test NIH Image sample images (big endian with nih_image_header)

def test_read_nih_morph():
    """Test read NIH."""
    # 388x252 uint8
    fname = data_file('nihimage/morph.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_nih
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.imagewidth == 388
        assert page.imagelength == 252
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (252, 388)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YX'
        # assert NIH tags
        tags = tif.nih_metadata
        assert tags['FileID'] == 'IPICIMAG'
        assert tags['PixelsPerLine'] == 388
        assert tags['nLines'] == 252
        assert tags['ForegroundIndex'] == 255
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (252, 388)
        assert data.dtype.name == 'uint8'
        assert data[195, 144] == 41
        assert__str__(tif)


def test_read_nih_silver_lake():
    """Test read NIH palette."""
    # 259x187 16 bit palette
    fname = data_file('nihimage/silver lake.tiff')
    with TiffFile(fname) as tif:
        assert tif.is_nih
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.is_contiguous
        assert page.photometric == PALETTE
        assert page.imagewidth == 259
        assert page.imagelength == 187
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (187, 259)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'YX'
        # assert NIH tags
        tags = tif.nih_metadata
        assert tags['FileID'] == 'IPICIMAG'
        assert tags['PixelsPerLine'] == 259
        assert tags['nLines'] == 187
        assert tags['ForegroundIndex'] == 109
        # assert data
        data = page.asrgb()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (187, 259, 3)
        assert data.dtype.name == 'uint16'
        assert tuple(data[86, 102, :]) == (26214, 39321, 39321)
        assert__str__(tif)


def test_read_imagej_focal1():
    """Test read ImageJ 205x434x425 uint8."""
    fname = data_file('imagej/focal1.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 205
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric != RGB
        assert page.imagewidth == 425
        assert page.imagelength == 434
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.offset == 768
        assert series.shape == (205, 434, 425)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'IYX'
        assert len(series._pages) == 1
        assert len(series.pages) == 205
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.34k'
        assert ijtags['images'] == 205
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (205, 434, 425)
        assert data.dtype.name == 'uint8'
        assert data[102, 216, 212] == 120
        assert__str__(tif, 0)


def test_read_imagej_hela_cells():
    """Test read ImageJ 512x672 RGB uint16."""
    fname = data_file('imagej/hela-cells.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 672
        assert page.imagelength == 512
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.shape == (512, 672, 3)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'YXS'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.46i'
        assert ijtags['channels'] == 3
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (512, 672, 3)
        assert data.dtype.name == 'uint16'
        assert tuple(data[255, 336]) == (440, 378, 298)
        assert__str__(tif)


def test_read_imagej_flybrain():
    """Test read ImageJ 57x256x256 RGB."""
    fname = data_file('imagej/flybrain.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 57
        assert len(tif.series) == 1  # hyperstack
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 8
        # assert series properties
        series = tif.series[0]
        assert series.shape == (57, 256, 256, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'ZYXS'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.43d'
        assert ijtags['slices'] == 57
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (57, 256, 256, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[18, 108, 97]) == (165, 157, 0)
        assert__str__(tif)


def test_read_imagej_confocal_series():
    """Test read ImageJ 25x2x400x400 ZCYX."""
    fname = data_file('imagej/confocal-series.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 50
        assert len(tif.series) == 1  # hyperstack
        # assert page properties
        page = tif.pages[0]
        assert page.imagewidth == 400
        assert page.imagelength == 400
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.shape == (25, 2, 400, 400)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'ZCYX'
        assert len(series._pages) == 1
        assert len(series.pages) == 50
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.43d'
        assert ijtags['images'] == len(tif.pages)
        assert ijtags['channels'] == 2
        assert ijtags['slices'] == 25
        assert ijtags['hyperstack']
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (25, 2, 400, 400)
        assert data.dtype.name == 'uint8'
        assert tuple(data[12, :, 100, 300]) == (6, 66)
        # assert only two pages are loaded
        assert isinstance(tif.pages.pages[0], TiffPage)
        assert isinstance(tif.pages.pages[1], TiffFrame)
        assert tif.pages.pages[2] == 8001073
        assert tif.pages.pages[-1] == 8008687
        assert__str__(tif)


def test_read_imagej_graphite():
    """Test read ImageJ 1024x593 float32."""
    fname = data_file('imagej/graphite1-1024.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.imagewidth == 1024
        assert page.imagelength == 593
        assert page.bitspersample == 32
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 1
        assert len(series.pages) == 1
        assert series.shape == (593, 1024)
        assert series.dtype.name == 'float32'
        assert series.axes == 'YX'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.47t'
        assert round(abs(ijtags['max']-1686.10949707), 7) == 0
        assert round(abs(ijtags['min']-852.08605957), 7) == 0
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (593, 1024)
        assert data.dtype.name == 'float32'
        assert round(abs(data[443, 656]-2203.040771484375), 7) == 0
        assert__str__(tif)


def test_read_imagej_bat_cochlea_volume():
    """Test read ImageJ 114 images, no frames, slices, channels specified."""
    fname = data_file('imagej/bat-cochlea-volume.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 114
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric != RGB
        assert page.imagewidth == 121
        assert page.imagelength == 154
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 1
        assert len(series.pages) == 114
        assert series.shape == (114, 154, 121)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'IYX'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.20n'
        assert ijtags['images'] == 114
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (114, 154, 121)
        assert data.dtype.name == 'uint8'
        assert data[113, 97, 61] == 255
        assert__str__(tif)


def test_read_imagej_first_instar_brain():
    """Test read ImageJ 56x256x256x3 ZYXS."""
    fname = data_file('imagej/first-instar-brain.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 56
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == RGB
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 1
        assert len(series.pages) == 56
        assert series.shape == (56, 256, 256, 3)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'ZYXS'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.44j'
        assert ijtags['images'] == 56
        assert ijtags['slices'] == 56
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (56, 256, 256, 3)
        assert data.dtype.name == 'uint8'
        assert tuple(data[55, 151, 112]) == (209, 8, 58)
        assert__str__(tif)


def test_read_imagej_fluorescentcells():
    """Test read ImageJ three channels."""
    fname = data_file('imagej/FluorescentCells.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 3
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric == PALETTE
        assert page.imagewidth == 512
        assert page.imagelength == 512
        assert page.bitspersample == 8
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.shape == (3, 512, 512)
        assert series.dtype.name == 'uint8'
        assert series.axes == 'CYX'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.40c'
        assert ijtags['images'] == 3
        assert ijtags['channels'] == 3
        # assert data
        data = tif.asarray()
        assert isinstance(data, numpy.ndarray)
        assert data.shape == (3, 512, 512)
        assert data.dtype.name == 'uint8'
        assert tuple(data[:, 256, 256]) == (57, 120, 13)
        assert__str__(tif)


@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_EXTENDED, reason='large image')
def test_read_imagej_100000_pages():
    """Test read ImageJ with 100000 pages."""
    # 100000x64x64
    # file is big endian, memory mapped
    fname = data_file('large/100000_pages.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 100000
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.imagewidth == 64
        assert page.imagelength == 64
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert len(series._pages) == 1
        assert len(series.pages) == 100000
        assert series.shape == (100000, 64, 64)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'TYX'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.48g'
        assert round(abs(ijtags['max']-119.0), 7) == 0
        assert round(abs(ijtags['min']-86.0), 7) == 0
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.core.memmap)
        assert data.shape == (100000, 64, 64)
        assert data.dtype.name == 'uint16'
        assert round(abs(data[7310, 25, 25]-100), 7) == 0
        del data
        assert__str__(tif, 0)


def test_read_imagej_invalid_metadata(caplog):
    """Test read bad ImageJ metadata."""
    # file contains 1 page but metadata claims 3500 images
    # memory map big endian data
    fname = data_file('sima/0.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '>'
        assert len(tif.pages) == 1
        assert len(tif.series) == 1
        assert 'invalid metadata or corrupted file' in caplog.text
        # assert page properties
        page = tif.pages[0]
        assert page.photometric != RGB
        assert page.imagewidth == 173
        assert page.imagelength == 173
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.offset == 8  # 8
        assert series.shape == (173, 173)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'YX'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['ImageJ'] == '1.49i'
        assert ijtags['images'] == 3500
        # assert data
        data = tif.asarray(out='memmap')
        assert isinstance(data, numpy.core.memmap)
        assert data.shape == (173, 173)
        assert data.dtype.name == 'uint16'
        assert data[94, 34] == 1257
        del data
        assert__str__(tif)


def test_read_imagej_invalid_hyperstack():
    """Test read bad ImageJ hyperstack."""
    # file claims to be a hyperstack but is not stored as such
    # produced by OME writer
    # reported by Taras Golota on 10/27/2016
    fname = data_file('imagej/X0.ome.CTZ.perm.tif')
    with TiffFile(fname) as tif:
        assert tif.is_imagej
        assert tif.byteorder == '<'
        assert len(tif.pages) == 48  # not a hyperstack
        assert len(tif.series) == 1
        # assert page properties
        page = tif.pages[0]
        assert page.photometric != RGB
        assert page.imagewidth == 1392
        assert page.imagelength == 1040
        assert page.bitspersample == 16
        assert page.is_contiguous
        # assert series properties
        series = tif.series[0]
        assert series.offset is None  # not contiguous
        assert series.shape == (2, 4, 6, 1040, 1392)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'TZCYX'
        # assert ImageJ tags
        ijtags = tif.imagej_metadata
        assert ijtags['hyperstack']
        assert ijtags['images'] == 48
        assert__str__(tif)


def test_read_fluoview_lsp1_v_laser():
    """Test read FluoView CTYX."""
    # raises 'UnicodeWarning: Unicode equal comparison failed' on Python 2
    fname = data_file('fluoview/lsp1-V-laser0.3-1.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 100
        assert len(tif.series) == 1
        assert tif.is_fluoview
        # assert page properties
        page = tif.pages[0]
        assert page.is_fluoview
        assert page.is_contiguous
        assert page.compression == NONE
        assert page.imagewidth == 256
        assert page.imagelength == 256
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        m = fluoview_description_metadata(page.description)
        assert m['Version Info']['FLUOVIEW Version'] == (
            'FV10-ASW ,ValidBitColunt=12')
        assert tuple(m['LUT Ch1'][255]) == (255, 255, 255)
        mm = tif.fluoview_metadata
        assert mm['ImageName'] == 'lsp1-V-laser0.3-1.oib'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (2, 50, 256, 256)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'CTYX'
        # assert data
        data = tif.asarray()
        assert data.shape == (2, 50, 256, 256)
        assert data.dtype.name == 'uint16'
        assert round(abs(data[1, 36, 128, 128]-824), 7) == 0
        assert__str__(tif)


@pytest.mark.skipif(SKIP_HUGE, reason='huge image')
@pytest.mark.skipif(IS_32BIT, reason='MemoryError on 32 bit')
def test_read_fluoview_120816_bf_f0000():
    """Test read FluoView TZYX."""
    fname = data_file('fluoview/120816_bf_f0000.tif')
    with TiffFile(fname) as tif:
        assert tif.byteorder == '<'
        assert len(tif.pages) == 864
        assert len(tif.series) == 1
        assert tif.is_fluoview
        # assert page properties
        page = tif.pages[0]
        assert page.is_fluoview
        assert page.is_contiguous
        assert page.compression == NONE
        assert page.imagewidth == 1024
        assert page.imagelength == 1024
        assert page.bitspersample == 16
        assert page.samplesperpixel == 1
        # assert metadata
        m = fluoview_description_metadata(page.description)
        assert m['Environment']['User'] == 'admin'
        assert m['Region Info (Fields) Field']['Width'] == 1331.2
        m = tif.fluoview_metadata
        assert m['ImageName'] == '120816_bf'
        # assert series properties
        series = tif.series[0]
        assert series.shape == (144, 6, 1024, 1024)
        assert series.dtype.name == 'uint16'
        assert series.axes == 'TZYX'
        # assert data
        data = tif.asarray()
        assert data.shape == (144, 6, 1024, 1024)
        assert data.dtype.name == 'uint16'
        assert round(abs(data[1, 2, 128, 128]-8317), 7) == 0
        assert__str__(tif)
def test_read_metaseries():
"""Test read MetaSeries 1040x1392 uint16, LZW."""
# Strips do not contain an EOI code as required by the TIFF spec.
fname = data_file('metaseries/metaseries.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.imagewidth == 1392
assert page.imagelength == 1040
assert page.bitspersample == 16
# assert metadata
assert page.description.startswith('<MetaData>')
# assert series properties
series = tif.series[0]
assert series.shape == (1040, 1392)
assert series.dtype.name == 'uint16'
assert series.axes == 'YX'
# assert data
data = tif.asarray()
assert data.shape == (1040, 1392)
assert data.dtype.name == 'uint16'
assert data[256, 256] == 1917
del data
assert__str__(tif)
def test_read_metaseries_g4d7r():
"""Test read Metamorph/Metaseries."""
# 12113x13453, uint16
fname = data_file('metaseries/g4d7r.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
assert tif.is_metaseries
# assert page properties
page = tif.pages[0]
assert page.is_metaseries
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 13453
assert page.imagelength == 12113
assert page.bitspersample == 16
assert page.samplesperpixel == 1
# assert metadata
m = metaseries_description_metadata(page.description)
assert m['ApplicationVersion'] == '7.8.6.0'
assert m['PlaneInfo']['pixel-size-x'] == 13453
assert m['SetInfo']['number-of-planes'] == 1
# assert series properties
series = tif.series[0]
assert series.shape == (12113, 13453)
assert series.dtype.name == 'uint16'
assert series.axes == 'YX'
# assert data
data = tif.asarray(out='memmap')
assert isinstance(data, numpy.core.memmap)
assert data.shape == (12113, 13453)
assert data.dtype.name == 'uint16'
assert round(abs(data[512, 2856]-4095), 7) == 0
del data
assert__str__(tif)
def test_read_mdgel_rat():
"""Test read Molecular Dynamics GEL."""
# Second page does not contain data, only private tags
# TZYX, uint16, OME multifile TIFF
fname = data_file('mdgel/rat.gel')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 2
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 1528
assert page.imagelength == 413
assert page.bitspersample == 16
assert page.samplesperpixel == 1
assert page.tags['Software'].value == (
"ImageQuant Software Release Version 2.0")
assert page.tags['PageName'].value == r"C:\DATA\RAT.GEL"
# assert 2nd page properties
page = tif.pages[1]
assert page.is_mdgel
assert page.imagewidth == 0
assert page.imagelength == 0
assert page.bitspersample == 1
assert page.samplesperpixel == 1
assert page.tags['MDFileTag'].value == 2
assert page.tags['MDScalePixel'].value == (1, 21025)
assert len(page.tags['MDColorTable'].value) == 17
md = tif.mdgel_metadata
assert md['SampleInfo'] == "Rat slices from Dr. Schweitzer"
assert md['PrepDate'] == "12 July 90"
assert md['PrepTime'] == "40hr"
assert md['FileUnits'] == "Counts"
# assert series properties
series = tif.series[0]
assert series.shape == (413, 1528)
assert series.dtype.name == 'float32'
assert series.axes == 'YX'
# assert data
data = series.asarray()
assert isinstance(data, numpy.ndarray)
assert data.shape == (413, 1528)
assert data.dtype.name == 'float32'
assert round(abs(data[260, 740]-399.1728515625), 7) == 0
assert__str__(tif)
def test_read_mediacy_imagepro():
"""Test read Media Cybernetics SEQ."""
# TZYX, uint16, OME multifile TIFF
fname = data_file('mediacy/imagepro.tif')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_mediacy
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 201
assert page.imagelength == 201
assert page.bitspersample == 8
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'Image-Pro Plus'
assert page.tags['MC_Id'].value[:-1] == b'MC TIFF 4.0'
# assert series properties
series = tif.series[0]
assert series.shape == (201, 201)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
# assert data
data = tif.asarray()
assert data.shape == (201, 201)
assert data.dtype.name == 'uint8'
assert round(abs(data[120, 34]-4), 7) == 0
assert__str__(tif)
def test_read_pilatus_100k():
"""Test read Pilatus."""
fname = data_file('TvxPilatus/Pilatus100K_scan030_033.tiff')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert tif.is_pilatus
# assert page properties
page = tif.pages[0]
assert page.imagewidth == 487
assert page.imagelength == 195
assert page.bitspersample == 32
assert page.samplesperpixel == 1
# assert metadata
assert page.tags['Model'].value == (
'PILATUS 100K, S/N 1-0230, Cornell University')
attr = pilatus_description_metadata(page.description)
assert attr['Tau'] == 1.991e-07
assert attr['Silicon'] == 0.000320
assert__str__(tif)
def test_read_pilatus_gibuf2():
"""Test read Pilatus."""
fname = data_file('TvxPilatus/GIbuf2_A9_18_001_0009.tiff')
with TiffFile(fname) as tif:
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert tif.is_pilatus
# assert page properties
page = tif.pages[0]
assert page.imagewidth == 487
assert page.imagelength == 195
assert page.bitspersample == 32
assert page.samplesperpixel == 1
# assert metadata
assert page.tags['Model'].value == 'PILATUS 100K-S, S/N 1-0299,'
attr = pilatus_description_metadata(page.description)
assert attr['Filter_transmission'] == 1.0
assert attr['Silicon'] == 0.000320
assert__str__(tif)
def test_read_epics_attrib():
"""Test read EPICS."""
fname = data_file('epics/attrib.tif')
with TiffFile(fname) as tif:
assert tif.is_epics
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert series properties
series = tif.series[0]
assert series.shape == (2048, 2048)
assert series.dtype.name == 'uint16'
assert series.axes == 'YX'
# assert page properties
page = tif.pages[0]
assert page.shape == (2048, 2048)
assert page.imagewidth == 2048
assert page.imagelength == 2048
assert page.bitspersample == 16
assert page.is_contiguous
# assert EPICS tags
tags = tif.epics_metadata
assert tags['timeStamp'] == datetime.datetime(
1995, 6, 2, 11, 31, 31, 571414)
assert tags['uniqueID'] == 15
assert tags['Focus'] == 0.6778
assert__str__(tif)
def test_read_tvips_tietz_16bit():
"""Test read TVIPS metadata."""
# file provided by Marco Oster on 10/26/2016
fname = data_file('tvips/test_tietz_16bit.tif')
with TiffFile(fname) as tif:
assert tif.is_tvips
tvips = tif.tvips_metadata
assert tvips['Magic'] == 0xaaaaaaaa
assert tvips['ImageFolder'] == u'B:\\4Marco\\Images\\Tiling_EMTOOLS\\'
assert__str__(tif)
def test_read_geotiff_dimapdocument():
"""Test read GeoTIFF with 43 MB XML tag value."""
# tag 65000 45070067s @487 "<Dimap_Document..."
fname = data_file('geotiff/DimapDocument.tif')
with TiffFile(fname) as tif:
assert tif.is_geotiff
assert tif.byteorder == '>'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert series properties
series = tif.series[0]
assert series.shape == (1830, 1830)
assert series.dtype.name == 'uint16'
assert series.axes == 'YX'
# assert page properties
page = tif.pages[0]
assert page.shape == (1830, 1830)
assert page.imagewidth == 1830
assert page.imagelength == 1830
assert page.bitspersample == 16
assert page.is_contiguous
assert page.tags['65000'].value.startswith(
'<Dimap_Document')
# assert GeoTIFF tags
tags = tif.geotiff_metadata
assert tags['GTCitationGeoKey'] == 'WGS 84 / UTM zone 29N'
assert tags['ProjectedCSTypeGeoKey'] == 32629
assert_array_almost_equal(
tags['ModelTransformation'],
[[60., 0., 0., 6.e5], [0., -60., 0., 5900040.],
[0., 0., 0., 0.], [0., 0., 0., 1.]])
assert__str__(tif)
def test_read_geotiff_spaf27_markedcorrect():
"""Test read GeoTIFF."""
fname = data_file('geotiff/spaf27_markedcorrect.tif')
with TiffFile(fname) as tif:
assert tif.is_geotiff
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert series properties
series = tif.series[0]
assert series.shape == (20, 20)
assert series.dtype.name == 'uint8'
assert series.axes == 'YX'
# assert page properties
page = tif.pages[0]
assert page.shape == (20, 20)
assert page.imagewidth == 20
assert page.imagelength == 20
assert page.bitspersample == 8
assert page.is_contiguous
# assert GeoTIFF tags
tags = tif.geotiff_metadata
assert tags['GTCitationGeoKey'] == 'NAD27 / California zone VI'
assert tags['GeogAngularUnitsGeoKey'] == 9102
assert tags['ProjFalseOriginLatGeoKey'] == 32.1666666666667
assert_array_almost_equal(tags['ModelPixelScale'],
[195.509321, 198.32184, 0])
assert__str__(tif)
def test_read_qpi():
"""Test read PerkinElmer-QPI."""
fname = data_file('PerkinElmer-QPI/'
'LuCa-7color_[13860,52919]_1x1component_data.tiff')
with TiffFile(fname) as tif:
assert len(tif.series) == 2
assert len(tif.pages) == 9
assert tif.is_qpi
page = tif.pages[0]
assert page.compression == LZW
assert page.photometric == MINISBLACK
assert page.planarconfig == CONTIG
assert page.imagewidth == 1868
assert page.imagelength == 1400
assert page.bitspersample == 32
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'PerkinElmer-QPI'
# assert data
image = tif.asarray()
assert image.shape == (8, 1400, 1868)
assert image.dtype == 'float32'
assert image[7, 1200, 1500] == 2.2132580280303955
image = tif.asarray(series=1)
assert image.shape == (350, 467, 3)
assert image.dtype == 'uint8'
assert image[300, 400, 1] == 48
assert__str__(tif)
def test_read_zif():
"""Test read Zoomable Image Format ZIF."""
fname = data_file('zif/ZoomifyImageExample.zif')
with TiffFile(fname) as tif:
# assert tif.is_zif
assert len(tif.pages) == 5
assert len(tif.series) == 5
for page in tif.pages:
assert page.description == ('Created by Objective '
'Pathology Services')
# first page
page = tif.pages[0]
assert page.photometric == YCBCR
assert page.compression == JPEG
assert page.shape == (3120, 2080, 3)
assert tuple(page.asarray()[3110, 2070, :]) == (27, 45, 59)
# page 4
page = tif.pages[-1]
assert page.photometric == YCBCR
assert page.compression == JPEG
assert page.shape == (195, 130, 3)
assert tuple(page.asarray()[191, 127, :]) == (30, 49, 66)
assert__str__(tif)
def test_read_sis():
"""Test read Olympus SIS."""
fname = data_file('sis/4A5IE8EM_F00000409.tif')
with TiffFile(fname) as tif:
assert tif.is_sis
assert tif.byteorder == '<'
assert len(tif.pages) == 122
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.imagewidth == 353
assert page.imagelength == 310
assert page.bitspersample == 16
assert page.samplesperpixel == 1
assert page.tags['Software'].value == 'analySIS 5.0'
# assert data
data = tif.asarray()
assert data.shape == (61, 2, 310, 353)
assert data[30, 1, 256, 256] == 210
# assert metadata
sis = tif.sis_metadata
assert sis['axes'] == 'TC'
assert sis['shape'] == (61, 2)
assert sis['Band'][1]['BandName'] == 'Fura380'
assert sis['Band'][0]['LUT'].shape == (256, 3)
assert sis['Time']['TimePos'].shape == (61,)
assert sis['name'] == 'Hela-Zellen'
assert sis['magnification'] == 60.0
assert__str__(tif)
def test_read_sis_noini():
"""Test read Olympus SIS without INI tag."""
fname = data_file('sis/110.tif')
with TiffFile(fname) as tif:
assert tif.is_sis
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.imagewidth == 2560
assert page.imagelength == 1920
assert page.bitspersample == 8
assert page.samplesperpixel == 3
# assert metadata
sis = tif.sis_metadata
assert 'axes' not in sis
assert sis['magnification'] == 20.0
assert__str__(tif)
def test_read_sem_metadata():
"""Test read Zeiss SEM metadata."""
# file from hyperspy tests
fname = data_file('hyperspy/test_tiff_Zeiss_SEM_1k.tif')
with TiffFile(fname) as tif:
assert tif.is_sem
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == PALETTE
assert page.imagewidth == 1024
assert page.imagelength == 768
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert data and metadata
data = page.asrgb()
assert tuple(data[563, 320]) == (38550, 38550, 38550)
sem = tif.sem_metadata
assert sem[''][3] == 2.614514e-06
assert sem['ap_date'] == ('Date', '23 Dec 2015')
assert sem['ap_time'] == ('Time', '9:40:32')
assert sem['dp_image_store'] == ('Store resolution', '1024 * 768')
if not IS_PY2:
assert sem['ap_fib_fg_emission_actual'] == (
'Flood Gun Emission Actual', 0.0, u'µA')
else:
assert sem['ap_fib_fg_emission_actual'] == (
'Flood Gun Emission Actual', 0.0, '\xb5A')
assert__str__(tif)
def test_read_sem_bad_metadata():
"""Test read Zeiss SEM metadata with wrong length."""
# reported by Klaus Schwarzburg on 8/27/2018
fname = data_file('issues/sem_bad_metadata.tif')
with TiffFile(fname) as tif:
assert tif.is_sem
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == PALETTE
assert page.imagewidth == 1024
assert page.imagelength == 768
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert data and metadata
data = page.asrgb()
assert tuple(data[350, 150]) == (17476, 17476, 17476)
sem = tif.sem_metadata
assert sem['sv_version'][1] == 'V05.07.00.00 : 08-Jul-14'
assert__str__(tif)
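# The asrgb() values asserted in the two SEM tests above come from palettes
# that map each 8-bit index i to the full-range 16-bit value i * 257
# (so 150 -> 38550 and 68 -> 17476). Minimal sketch, inferred from the
# asserted values rather than a general TIFF guarantee:

```python
# Scale an 8-bit sample to the full 16-bit range, as the palettes in
# these files do. Inference from the values asserted above.
def scale8to16(value):
    """Scale an 8-bit value to 16 bits (0xFF -> 0xFFFF)."""
    return value * 257  # same as (value << 8) | value

assert scale8to16(150) == 38550
assert scale8to16(68) == 17476
assert scale8to16(255) == 0xFFFF
```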
def test_read_fei_metadata():
"""Test read Helios FEI metadata."""
# file from hyperspy tests
fname = data_file('hyperspy/test_tiff_FEI_SEM.tif')
with TiffFile(fname) as tif:
assert tif.is_fei
assert tif.byteorder == '<'
assert len(tif.pages) == 1
assert len(tif.series) == 1
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric != PALETTE
assert page.imagewidth == 1536
assert page.imagelength == 1103
assert page.bitspersample == 8
assert page.samplesperpixel == 1
# assert data and metadata
data = page.asarray()
assert data[563, 320] == 220
fei = tif.fei_metadata
assert fei['User']['User'] == 'supervisor'
assert fei['System']['DisplayHeight'] == 0.324
assert__str__(tif)
###############################################################################
# Test TiffWriter
WRITE_DATA = numpy.arange(3*219*301).astype('uint16').reshape((3, 219, 301))
@pytest.mark.skipif(SKIP_EXTENDED, reason='generates >2 GB')
@pytest.mark.parametrize('shape', [
(219, 301),
(219, 301, 2),
(219, 301, 3),
(219, 301, 4),
(2, 219, 301),
(3, 219, 301),
(4, 219, 301),
(5, 219, 301),
(4, 3, 219, 301),
(4, 219, 301, 3),
(3, 4, 219, 301),
(3, 4, 219, 301, 1)])
@pytest.mark.parametrize('dtype', list('?bhiqfdBHIQFD'))
@pytest.mark.parametrize('byteorder', ['>', '<'])
@pytest.mark.parametrize('bigtiff', ['plaintiff', 'bigtiff'])
@pytest.mark.parametrize('data', ['random', 'empty'])
def test_write(data, byteorder, bigtiff, dtype, shape):
"""Test TiffWriter with various options."""
# TODO: test compression ?
fname = '%s_%s_%s_%s%s' % (
bigtiff,
{'<': 'le', '>': 'be'}[byteorder],
numpy.dtype(dtype).name,
str(shape).replace(' ', ''),
'_empty' if data == 'empty' else '')
bigtiff = bigtiff == 'bigtiff'
with TempFileName(fname) as fname:
if data == 'empty':
with TiffWriter(fname, byteorder=byteorder,
bigtiff=bigtiff) as tif:
tif.save(shape=shape, dtype=dtype)
with TiffFile(fname) as tif:
assert__str__(tif)
image = tif.asarray()
else:
data = random_data(dtype, shape)
imwrite(fname, data, byteorder=byteorder, bigtiff=bigtiff)
image = imread(fname)
assert_array_equal(data.squeeze(), image.squeeze())
assert shape == image.shape
assert dtype == image.dtype
if not bigtiff:
assert_jhove(fname)
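# The dtype parametrization above uses numpy dtype characters. A minimal
# sketch (not part of the test suite) of what each character selects:

```python
# Map the numpy dtype characters from the parametrize list above to
# their dtype names. Illustrative sketch only.
import numpy

DTYPE_CHARS = '?bhiqfdBHIQFD'
names = {c: numpy.dtype(c).name for c in DTYPE_CHARS}

assert names['?'] == 'bool'
assert names['h'] == 'int16'
assert names['Q'] == 'uint64'
assert names['D'] == 'complex128'
```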
def test_write_nopages():
"""Test write TIFF with no pages."""
with TempFileName('nopages') as fname:
with TiffWriter(fname) as tif:
pass
with TiffFile(fname) as tif:
assert len(tif.pages) == 0
tif.asarray()
if VALIDATE:
with pytest.raises(ValueError):
assert_jhove(fname)
def test_write_append_not_exists():
"""Test append to non existing file."""
with TempFileName('append_not_exists.bin') as fname:
# with self.assertRaises(ValueError):
with TiffWriter(fname, append=True):
pass
def test_write_append_nontif():
"""Test fail to append to non-TIFF file."""
with TempFileName('append_nontif.bin') as fname:
with open(fname, 'wb') as fh:
fh.write(b'not a TIFF file')
with pytest.raises(ValueError):
with TiffWriter(fname, append=True):
pass
def test_write_append_lsm():
"""Test fail to append to LSM file."""
fname = data_file('lsm/take1.lsm')
with pytest.raises(ValueError):
with TiffWriter(fname, append=True):
pass
def test_write_append_imwrite():
"""Test append using imwrite."""
data = random_data('uint8', (21, 31))
with TempFileName('imwrite_append') as fname:
imwrite(fname, data, metadata=None)
for _ in range(3):
imwrite(fname, data, append=True, metadata=None)
a = imread(fname)
assert a.shape == (4, 21, 31)
assert_array_equal(a[3], data)
def test_write_append():
"""Test append to existing TIFF file."""
data = random_data('uint8', (21, 31))
with TempFileName('append') as fname:
with TiffWriter(fname) as tif:
pass
with TiffFile(fname) as tif:
assert len(tif.pages) == 0
assert__str__(tif)
with TiffWriter(fname, append=True) as tif:
tif.save(data)
with TiffFile(fname) as tif:
assert len(tif.series) == 1
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.imagewidth == 31
assert page.imagelength == 21
assert__str__(tif)
with TiffWriter(fname, append=True) as tif:
tif.save(data)
tif.save(data)
with TiffFile(fname) as tif:
assert len(tif.series) == 2
assert len(tif.pages) == 3
page = tif.pages[0]
assert page.imagewidth == 31
assert page.imagelength == 21
assert_array_equal(tif.asarray(series=1)[1], data)
assert__str__(tif)
assert_jhove(fname)
def test_write_append_bytesio():
"""Test append to existing TIFF file in BytesIO."""
data = random_data('uint8', (21, 31))
offset = 11
file = BytesIO()
file.write(b'a' * offset)
with TiffWriter(file) as tif:
pass
file.seek(offset)
with TiffFile(file) as tif:
assert len(tif.pages) == 0
file.seek(offset)
with TiffWriter(file, append=True) as tif:
tif.save(data)
file.seek(offset)
with TiffFile(file) as tif:
assert len(tif.series) == 1
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.imagewidth == 31
assert page.imagelength == 21
assert__str__(tif)
file.seek(offset)
with TiffWriter(file, append=True) as tif:
tif.save(data)
tif.save(data)
file.seek(offset)
with TiffFile(file) as tif:
assert len(tif.series) == 2
assert len(tif.pages) == 3
page = tif.pages[0]
assert page.imagewidth == 31
assert page.imagelength == 21
assert_array_equal(tif.asarray(series=1)[1], data)
assert__str__(tif)
def test_write_roundtrip_filename():
"""Test write and read using file name."""
data = imread(data_file('vigranumpy.tif'))
with TempFileName('roundtrip_filename') as fname:
imwrite(fname, data)
assert_array_equal(imread(fname), data)
def test_write_roundtrip_openfile():
"""Test write and read using open file."""
pad = b'0' * 7
data = imread(data_file('vigranumpy.tif'))
with TempFileName('roundtrip_openfile') as fname:
with open(fname, 'wb') as fh:
fh.write(pad)
imwrite(fh, data)
fh.write(pad)
with open(fname, 'rb') as fh:
fh.seek(len(pad))
assert_array_equal(imread(fh), data)
def test_write_roundtrip_bytesio():
"""Test write and read using BytesIO."""
pad = b'0' * 7
data = imread(data_file('vigranumpy.tif'))
buf = BytesIO()
buf.write(pad)
imwrite(buf, data)
buf.write(pad)
buf.seek(len(pad))
assert_array_equal(imread(buf), data)
def test_write_pages():
"""Test write tags for contiguous data in all pages."""
data = random_data('float32', (17, 219, 301))
with TempFileName('pages') as fname:
imwrite(fname, data, photometric='minisblack')
assert_jhove(fname)
# assert file
with TiffFile(fname) as tif:
assert len(tif.pages) == 17
for i, page in enumerate(tif.pages):
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == MINISBLACK
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
image = page.asarray()
assert_array_equal(data[i], image)
# assert series
series = tif.series[0]
assert series.offset is not None
image = series.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_truncate():
"""Test only one page is written for truncated files."""
shape = (4, 5, 6, 1)
with TempFileName('truncate') as fname:
imwrite(fname, random_data('uint8', shape), truncate=True)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1 # not 4
page = tif.pages[0]
assert page.is_shaped
assert page.shape == (5, 6)
assert '"shape": [4, 5, 6, 1]' in page.description
assert '"truncated": true' in page.description
series = tif.series[0]
assert series.shape == shape
assert len(series._pages) == 1
assert len(series.pages) == 1
data = tif.asarray()
assert data.shape == shape
assert__str__(tif)
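# The truncated-file description asserted above is plain JSON; parsing it
# back is straightforward (illustrative sketch mirroring the asserted
# string, independent of tifffile):

```python
# Parse the shaped/truncated ImageDescription JSON checked above.
import json

meta = json.loads('{"shape": [4, 5, 6, 1], "truncated": true}')
assert meta['shape'] == [4, 5, 6, 1]
assert meta['truncated'] is True
```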
def test_write_is_shaped():
"""Test files are written with shape."""
with TempFileName('is_shaped') as fname:
imwrite(fname, random_data('uint8', (4, 5, 6, 3)))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 4
page = tif.pages[0]
assert page.is_shaped
assert page.description == '{"shape": [4, 5, 6, 3]}'
assert__str__(tif)
with TempFileName('is_shaped_with_description') as fname:
descr = "test is_shaped_with_description"
imwrite(fname, random_data('uint8', (5, 6, 3)), description=descr)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_shaped
assert page.description == descr
assert__str__(tif)
def test_write_extratags():
"""Test write extratags."""
data = random_data('uint8', (2, 219, 301))
description = "Created by TestTiffWriter\nLorem ipsum dolor..."
pagename = "Page name"
extratags = [(270, 's', 0, description, True),
('PageName', 's', 0, pagename, False),
(50001, 'b', 1, b'1', True),
(50002, 'b', 2, b'12', True),
(50004, 'b', 4, b'1234', True),
(50008, 'B', 8, b'12345678', True),
]
with TempFileName('extratags') as fname:
imwrite(fname, data, extratags=extratags)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert tif.pages[0].description1 == description
assert 'ImageDescription' not in tif.pages[1].tags
assert tif.pages[0].tags['PageName'].value == pagename
assert tif.pages[1].tags['PageName'].value == pagename
tags = tif.pages[0].tags
assert tags['50001'].value == 49
assert tags['50002'].value == (49, 50)
assert tags['50004'].value == (49, 50, 51, 52)
assert_array_equal(tags['50008'].value, b'12345678')
# (49, 50, 51, 52, 53, 54, 55, 56))
assert__str__(tif)
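# Each extratag above is a (code, dtype, count, value, writeonce) tuple;
# byte strings written with dtype 'b' read back as the integer ordinals
# asserted above (b'1' -> 49). Sketch only, independent of tifffile:

```python
# Unpack an extratag tuple and show how a byte string maps to the
# integer ordinals asserted above.
tag = (50002, 'b', 2, b'12', True)
code, dtype, count, value, writeonce = tag
assert count == len(value)
assert tuple(value) == (49, 50)  # ordinals of b'1' and b'2'
```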
def test_write_double_tags():
"""Test write single and sequences of doubles."""
# older versions of tifffile do not use offset to write doubles
# reported by Eric Prestat on Feb 21, 2016
data = random_data('uint8', (8, 8))
value = math.pi
extratags = [
(34563, 'd', 1, value, False),
(34564, 'd', 1, (value,), False),
(34565, 'd', 2, (value, value), False),
(34566, 'd', 2, [value, value], False),
(34567, 'd', 2, numpy.array((value, value)), False),
]
with TempFileName('double_tags') as fname:
imwrite(fname, data, extratags=extratags)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
tags = tif.pages[0].tags
assert tags['34563'].value == value
assert tags['34564'].value == value
assert tuple(tags['34565'].value) == (value, value)
assert tuple(tags['34566'].value) == (value, value)
assert tuple(tags['34567'].value) == (value, value)
assert__str__(tif)
with TempFileName('double_tags_bigtiff') as fname:
imwrite(fname, data, bigtiff=True, extratags=extratags)
# assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
tags = tif.pages[0].tags
assert tags['34563'].value == value
assert tags['34564'].value == value
assert tuple(tags['34565'].value) == (value, value)
assert tuple(tags['34566'].value) == (value, value)
assert tuple(tags['34567'].value) == (value, value)
assert__str__(tif)
def test_write_short_tags():
"""Test write single and sequences of words."""
data = random_data('uint8', (8, 8))
value = 65531
extratags = [
(34564, 'H', 1, (value,) * 1, False),
(34565, 'H', 2, (value,) * 2, False),
(34566, 'H', 3, (value,) * 3, False),
(34567, 'H', 4, (value,) * 4, False),
]
with TempFileName('short_tags') as fname:
imwrite(fname, data, extratags=extratags)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
tags = tif.pages[0].tags
assert tags['34564'].value == value
assert tuple(tags['34565'].value) == (value,) * 2
assert tuple(tags['34566'].value) == (value,) * 3
assert tuple(tags['34567'].value) == (value,) * 4
assert__str__(tif)
@pytest.mark.parametrize('subfiletype', [0b1, 0b10, 0b100, 0b1000, 0b1111])
def test_write_subfiletype(subfiletype):
"""Test write subfiletype."""
data = random_data('uint8', (16, 16))
if subfiletype & 0b100:
data = data.astype('bool')
with TempFileName('subfiletype_%i' % subfiletype) as fname:
imwrite(fname, data, subfiletype=subfiletype)
assert_jhove(fname)
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.subfiletype == subfiletype
assert page.is_reduced == subfiletype & 0b1
assert page.is_multipage == subfiletype & 0b10
assert page.is_mask == subfiletype & 0b100
assert page.is_mrc == subfiletype & 0b1000
assert_array_equal(data, page.asarray())
assert__str__(tif)
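# NewSubfileType is a bitmask: bit 0 marks a reduced-resolution image,
# bit 1 one page of a multi-page image, bit 2 a transparency mask, and
# bit 3 the flag exercised by is_mrc above. Sketch with assumed helper
# and flag names, not tifffile API:

```python
# Decode a NewSubfileType bitmask into boolean flags (assumed names).
REDUCED, MULTIPAGE, MASK, MRC = 0b1, 0b10, 0b100, 0b1000

def subfiletype_flags(value):
    """Return which NewSubfileType bits are set."""
    return {'reduced': bool(value & REDUCED),
            'multipage': bool(value & MULTIPAGE),
            'mask': bool(value & MASK),
            'mrc': bool(value & MRC)}

assert subfiletype_flags(0b101) == {'reduced': True, 'multipage': False,
                                    'mask': True, 'mrc': False}
```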
def test_write_description_tag():
"""Test write two description tags."""
data = random_data('uint8', (2, 219, 301))
description = "Created by TestTiffWriter\nLorem ipsum dolor..."
with TempFileName('description_tag') as fname:
imwrite(fname, data, description=description)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert tif.pages[0].description == description
assert tif.pages[0].description1 == '{"shape": [2, 219, 301]}'
assert 'ImageDescription' not in tif.pages[1].tags
assert__str__(tif)
def test_write_description_tag_nojson():
"""Test no JSON description is written with metatata=None."""
data = random_data('uint8', (2, 219, 301))
description = "Created by TestTiffWriter\nLorem ipsum dolor..."
with TempFileName('description_tag_nojson') as fname:
imwrite(fname, data, description=description, metadata=None)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert tif.pages[0].description == description
assert 'ImageDescription' not in tif.pages[1].tags
assert 'ImageDescription1' not in tif.pages[0].tags
assert__str__(tif)
def test_write_software_tag():
"""Test write Software tag."""
data = random_data('uint8', (2, 219, 301))
software = "test_tifffile.py"
with TempFileName('software_tag') as fname:
imwrite(fname, data, software=software)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert tif.pages[0].software == software
assert 'Software' not in tif.pages[1].tags
assert__str__(tif)
def test_write_resolution_float():
"""Test write float Resolution tag."""
data = random_data('uint8', (2, 219, 301))
resolution = (92., 92.)
with TempFileName('resolution_float') as fname:
imwrite(fname, data, resolution=resolution)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert tif.pages[0].tags['XResolution'].value == (92, 1)
assert tif.pages[0].tags['YResolution'].value == (92, 1)
assert tif.pages[1].tags['XResolution'].value == (92, 1)
assert tif.pages[1].tags['YResolution'].value == (92, 1)
assert__str__(tif)
def test_write_resolution_rational():
"""Test write rational Resolution tag."""
data = random_data('uint8', (1, 219, 301))
resolution = ((300, 1), (300, 1))
with TempFileName('resolution_rational') as fname:
imwrite(fname, data, resolution=resolution)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
assert tif.pages[0].tags['XResolution'].value == (300, 1)
assert tif.pages[0].tags['YResolution'].value == (300, 1)
def test_write_resolution_unit():
"""Test write Resolution tag unit."""
data = random_data('uint8', (219, 301))
resolution = (92., (9200, 100), None)
with TempFileName('resolution_unit') as fname:
imwrite(fname, data, resolution=resolution)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
assert tif.pages[0].tags['XResolution'].value == (92, 1)
assert tif.pages[0].tags['YResolution'].value == (92, 1)
assert tif.pages[0].tags['ResolutionUnit'].value == 1
assert__str__(tif)
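# Float resolutions end up as the (numerator, denominator) rationals
# asserted in the resolution tests above. A sketch of the conversion
# using fractions (illustrative only, not tifffile's internal code):

```python
# Convert a float resolution to the rational stored in the
# XResolution/YResolution tags.
from fractions import Fraction

def to_rational(value, max_denominator=10000):
    """Return (numerator, denominator) approximating value."""
    frac = Fraction(value).limit_denominator(max_denominator)
    return frac.numerator, frac.denominator

assert to_rational(92.0) == (92, 1)
assert to_rational(9200 / 100) == (92, 1)
```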
def test_write_compress_none():
"""Test write compress=0."""
data = WRITE_DATA
with TempFileName('compress_none') as fname:
imwrite(fname, data, compress=0)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.compression == NONE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert len(page.dataoffsets) == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_compress_deflate():
"""Test write ZLIB compression."""
data = WRITE_DATA
with TempFileName('compress_deflate') as fname:
imwrite(fname, data, compress=('DEFLATE', 6))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == DEFLATE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert page.rowsperstrip == 108
assert len(page.dataoffsets) == 9
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_compress_deflate_level():
"""Test write ZLIB compression with level."""
data = WRITE_DATA
with TempFileName('compress_deflate_level') as fname:
imwrite(fname, data, compress=9)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == ADOBE_DEFLATE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_compress_lzma():
"""Test write LZMA compression."""
data = WRITE_DATA
with TempFileName('compress_lzma') as fname:
imwrite(fname, data, compress='LZMA')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == LZMA
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert page.rowsperstrip == 108
assert len(page.dataoffsets) == 9
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
@pytest.mark.skipif(IS_PY2, reason='zstd not available')
def test_write_compress_zstd():
"""Test write ZSTD compression."""
data = WRITE_DATA
with TempFileName('compress_zstd') as fname:
imwrite(fname, data, compress='ZSTD')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == ZSTD
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert page.rowsperstrip == 108
assert len(page.dataoffsets) == 9
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_compress_webp():
"""Test write WEBP compression."""
data = WRITE_DATA.astype('uint8').reshape((219, 301, 3))
with TempFileName('compress_webp') as fname:
imwrite(fname, data, compress=('WEBP', -1))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == WEBP
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
@pytest.mark.parametrize('dtype', ['i1', 'u1', 'bool'])
def test_write_compress_packbits(dtype):
"""Test write PackBits compression."""
uncompressed = numpy.frombuffer(
b'\xaa\xaa\xaa\x80\x00\x2a\xaa\xaa\xaa\xaa\x80\x00'
b'\x2a\x22\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa', dtype=dtype)
shape = 2, 7, uncompressed.size
data = numpy.empty(shape, dtype=dtype)
data[..., :] = uncompressed
with TempFileName('compress_packbits_%s' % dtype) as fname:
imwrite(fname, data, compress='PACKBITS')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == PACKBITS
assert page.planarconfig == CONTIG
assert page.imagewidth == uncompressed.size
assert page.imagelength == 7
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
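The PackBits test above uses Apple's canonical sample data. For reference, the scheme is simple enough to sketch in a few lines of pure Python (illustrative only, not part of the tifffile API): a header byte n in 0..127 copies the next n+1 literal bytes, n in 129..255 repeats the next byte 257-n times, and 128 is a no-op.

```python
def packbits_decode(encoded):
    """Decompress PackBits encoded bytes (illustrative sketch only)."""
    encoded = bytearray(encoded)  # byte-wise indexing on Python 2 and 3
    out = bytearray()
    i = 0
    while i < len(encoded):
        n = encoded[i]
        i += 1
        if n < 128:
            # literal run: copy the next n+1 bytes verbatim
            out += encoded[i:i + n + 1]
            i += n + 1
        elif n > 128:
            # replicate run: repeat the next byte 257-n times
            out += encoded[i:i + 1] * (257 - n)
            i += 1
        # n == 128 is a no-op
    return bytes(out)
```

Decoding b'\xfe\xaa\x02\x80\x00\x2a\xfd\xaa\x03\x80\x00\x2a\x22\xf7\xaa' reproduces the 24 uncompressed bytes used in test_write_compress_packbits.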
def test_write_compress_rowsperstrip():
"""Test write rowsperstrip with compression."""
data = WRITE_DATA
with TempFileName('compress_rowsperstrip') as fname:
imwrite(fname, data, compress=6, rowsperstrip=32)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == ADOBE_DEFLATE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert page.rowsperstrip == 32
assert len(page.dataoffsets) == 21
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_compress_tiled():
"""Test write compressed tiles."""
data = WRITE_DATA
with TempFileName('compress_tiled') as fname:
imwrite(fname, data, compress=6, tile=(32, 32))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.is_tiled
assert page.compression == ADOBE_DEFLATE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert len(page.dataoffsets) == 210
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
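The dataoffsets counts asserted in the strip and tile tests follow from ceiling division: strips per plane is ceil(imagelength / rowsperstrip), tiles per plane is ceil(imagelength / tilelength) * ceil(imagewidth / tilewidth), and with separate planar configuration each sample plane contributes its own segments. A hedged sketch (the helper name is illustrative, not tifffile API):

```python
def count_segments(imagelength, imagewidth, planes, tile=None,
                   rowsperstrip=None):
    """Return the expected number of strip or tile offsets per page."""
    def ceildiv(a, b):
        # ceiling integer division without floats
        return -(-a // b)
    if tile is not None:
        return (ceildiv(imagelength, tile[0]) *
                ceildiv(imagewidth, tile[1]) * planes)
    return ceildiv(imagelength, rowsperstrip) * planes
```

For the 219x301 RGB data written with separate planes this gives 9 strips at rowsperstrip=108, 21 strips at rowsperstrip=32, and 210 tiles of 32x32, matching the assertions above.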
def test_write_compress_predictor():
"""Test write horizontal differencing."""
data = WRITE_DATA
with TempFileName('compress_predictor') as fname:
imwrite(fname, data, compress=6, predictor=True)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.compression == ADOBE_DEFLATE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert page.predictor == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
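The predictor tested above (TIFF predictor 2, horizontal differencing) stores each sample as the difference to its left neighbor, so rows of slowly varying values become runs of small numbers that compress better. A minimal pure-Python sketch for a single row of uint8 samples (helper names are illustrative, not tifffile API):

```python
def hdiff_encode(row):
    """Apply horizontal differencing to a list of uint8 samples."""
    return [row[0]] + [(row[i] - row[i - 1]) % 256
                       for i in range(1, len(row))]


def hdiff_decode(row):
    """Undo horizontal differencing (cumulative sum modulo 256)."""
    out = [row[0]]
    for d in row[1:]:
        out.append((out[-1] + d) % 256)
    return out
```

For floating point data tifffile writes predictor 3 (floating point differencing) instead, as the tiled predictor test below checks.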
@pytest.mark.parametrize('dtype', ['u2', 'f4'])
def test_write_compressed_predictor_tiled(dtype):
"""Test write horizontal differencing with tiles."""
data = WRITE_DATA.astype(dtype)
with TempFileName('compress_tiled_predictor_%s' % dtype) as fname:
imwrite(fname, data, compress=6, predictor=True, tile=(32, 32))
if dtype[0] != 'f':
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.is_tiled
assert page.compression == ADOBE_DEFLATE
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
assert page.predictor == (3 if dtype[0] == 'f' else 2)
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
@pytest.mark.skipif(sys.byteorder == 'big', reason='little endian only')
def test_write_bigendian():
"""Test write big endian file."""
# also test memory mapping non-native byte order
data = random_data('float32', (2, 3, 219, 301)).newbyteorder()
with TempFileName('write_bigendian') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert len(tif.series) == 1
assert tif.byteorder == '>'
# assert not tif.isnative
assert tif.series[0].offset is not None
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
# test reading data
image = tif.asarray()
assert_array_equal(data, image)
image = page.asarray()
assert_array_equal(data[0], image)
# test direct memory mapping; returns big endian array
image = tif.asarray(out='memmap')
assert isinstance(image, numpy.core.memmap)
assert image.dtype == numpy.dtype('>f4')
assert_array_equal(data, image)
del image
image = page.asarray(out='memmap')
assert isinstance(image, numpy.core.memmap)
assert image.dtype == numpy.dtype('>f4')
assert_array_equal(data[0], image)
del image
# test indirect memory mapping; returns native endian array
image = tif.asarray(out='memmap:')
assert isinstance(image, numpy.core.memmap)
assert image.dtype == numpy.dtype('=f4')
assert_array_equal(data, image)
del image
image = page.asarray(out='memmap:')
assert isinstance(image, numpy.core.memmap)
assert image.dtype == numpy.dtype('=f4')
assert_array_equal(data[0], image)
del image
# test 2nd page
page = tif.pages[1]
image = page.asarray(out='memmap')
assert isinstance(image, numpy.core.memmap)
assert image.dtype == numpy.dtype('>f4')
assert_array_equal(data[1], image)
del image
image = page.asarray(out='memmap:')
assert isinstance(image, numpy.core.memmap)
assert image.dtype == numpy.dtype('=f4')
assert_array_equal(data[1], image)
del image
assert__str__(tif)
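The big endian test distinguishes the stored byte order ('>f4') from the native order ('=f4'). What the byte order amounts to at the byte level can be shown with the standard library alone (a minimal sketch, unrelated to tifffile's memmap machinery):

```python
import struct


def float32_bigendian_bytes(value):
    """Pack a Python float as big endian IEEE 754 binary32."""
    return struct.pack('>f', value)
```

The same value packed with '<f' yields the reversed byte sequence, which is why reading a '>f4' memmap on a little endian machine returns data that must be byte swapped to become native.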
def test_write_empty_array():
"""Test write empty array fails."""
with pytest.raises(ValueError):
with TempFileName('empty') as fname:
imwrite(fname, numpy.empty(0))
def test_write_pixel():
"""Test write single pixel."""
data = numpy.zeros(1, dtype='uint8')
with TempFileName('pixel') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
assert tif.series[0].axes == 'Y'
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 1
assert page.imagelength == 1
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_small():
"""Test write small image."""
data = random_data('uint8', (1, 1))
with TempFileName('small') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 1
assert page.imagelength == 1
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_2d_as_rgb():
"""Test write RGB color palette as RGB image."""
# image length should be 1
data = numpy.arange(3*256, dtype='uint16').reshape(256, 3) // 3
with TempFileName('2d_as_rgb_contig') as fname:
imwrite(fname, data, photometric=RGB)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
assert tif.series[0].axes == 'XS'
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 256
assert page.imagelength == 1
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_invalid_contig_rgb():
"""Test write planar RGB with 2 samplesperpixel."""
data = random_data('uint8', (219, 301, 2))
with pytest.raises(ValueError):
with TempFileName('invalid_contig_rgb') as fname:
imwrite(fname, data, photometric=RGB)
# default to pages
with TempFileName('invalid_contig_rgb_pages') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 219
assert tif.series[0].axes == 'QYX'
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 2
assert page.imagelength == 301
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
# better to save as contig samples
with TempFileName('invalid_contig_rgb_samples') as fname:
imwrite(fname, data, planarconfig='CONTIG')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
assert tif.series[0].axes == 'YXS'
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_invalid_planar_rgb():
"""Test write planar RGB with 2 samplesperpixel."""
data = random_data('uint8', (2, 219, 301))
with pytest.raises(ValueError):
with TempFileName('invalid_planar_rgb') as fname:
imwrite(fname, data, photometric=RGB, planarconfig='SEPARATE')
# default to pages
with TempFileName('invalid_planar_rgb_pages') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
assert tif.series[0].axes == 'QYX'
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
# or save as planar samples
with TempFileName('invalid_planar_rgb_samples') as fname:
imwrite(fname, data, planarconfig='SEPARATE')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
assert tif.series[0].axes == 'SYX'
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_gray():
"""Test write grayscale with extrasamples contig."""
data = random_data('uint8', (301, 219, 2))
with TempFileName('extrasamples_gray') as fname:
imwrite(fname, data, extrasamples='UNASSALPHA')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == MINISBLACK
assert page.planarconfig == CONTIG
assert page.imagewidth == 219
assert page.imagelength == 301
assert page.samplesperpixel == 2
assert page.extrasamples == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_gray_planar():
"""Test write planar grayscale with extrasamples."""
data = random_data('uint8', (2, 301, 219))
with TempFileName('extrasamples_gray_planar') as fname:
imwrite(fname, data, planarconfig='SEPARATE',
extrasamples='UNASSALPHA')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == MINISBLACK
assert page.planarconfig == SEPARATE
assert page.imagewidth == 219
assert page.imagelength == 301
assert page.samplesperpixel == 2
assert page.extrasamples == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_gray_mix():
"""Test write grayscale with multiple extrasamples."""
data = random_data('uint8', (301, 219, 4))
with TempFileName('extrasamples_gray_mix') as fname:
imwrite(fname, data, photometric='MINISBLACK',
extrasamples=['ASSOCALPHA', 'UNASSALPHA', 'UNSPECIFIED'])
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == MINISBLACK
assert page.imagewidth == 219
assert page.imagelength == 301
assert page.samplesperpixel == 4
assert page.extrasamples == (1, 2, 0)
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_unspecified():
"""Test write RGB with unspecified extrasamples by default."""
data = random_data('uint8', (301, 219, 5))
with TempFileName('extrasamples_unspecified') as fname:
imwrite(fname, data, photometric='RGB')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == RGB
assert page.imagewidth == 219
assert page.imagelength == 301
assert page.samplesperpixel == 5
assert page.extrasamples == (0, 0)
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_assocalpha():
"""Test write RGB with assocalpha extrasample."""
data = random_data('uint8', (219, 301, 4))
with TempFileName('extrasamples_assocalpha') as fname:
imwrite(fname, data, photometric='RGB', extrasamples='ASSOCALPHA')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 4
assert page.extrasamples == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_mix():
"""Test write RGB with mixture of extrasamples."""
data = random_data('uint8', (219, 301, 6))
with TempFileName('extrasamples_mix') as fname:
imwrite(fname, data, photometric='RGB',
extrasamples=['ASSOCALPHA', 'UNASSALPHA', 'UNSPECIFIED'])
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 6
assert page.extrasamples == (1, 2, 0)
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_contig():
"""Test write contig grayscale with large number of extrasamples."""
data = random_data('uint8', (3, 219, 301))
with TempFileName('extrasamples_contig') as fname:
imwrite(fname, data, planarconfig='CONTIG')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 219
assert page.imagelength == 3
assert page.samplesperpixel == 301
assert len(page.extrasamples) == 301-1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
# better to save as planar RGB
with TempFileName('extrasamples_contig_planar') as fname:
imwrite(fname, data, planarconfig='SEPARATE')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_contig_rgb2():
"""Test write contig RGB with large number of extrasamples."""
data = random_data('uint8', (3, 219, 301))
with TempFileName('extrasamples_contig_rgb2') as fname:
imwrite(fname, data, photometric=RGB, planarconfig='CONTIG')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 219
assert page.imagelength == 3
assert page.samplesperpixel == 301
assert len(page.extrasamples) == 301-3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
# better to save as planar
with TempFileName('extrasamples_contig_rgb2_planar') as fname:
imwrite(fname, data, photometric=RGB, planarconfig='SEPARATE')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_planar():
"""Test write planar large number of extrasamples."""
data = random_data('uint8', (219, 301, 3))
with TempFileName('extrasamples_planar') as fname:
imwrite(fname, data, planarconfig='SEPARATE')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric != RGB
assert page.imagewidth == 3
assert page.imagelength == 301
assert page.samplesperpixel == 219
assert len(page.extrasamples) == 219-1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_planar_rgb2():
"""Test write planar RGB with large number of extrasamples."""
data = random_data('uint8', (219, 301, 3))
with TempFileName('extrasamples_planar_rgb2') as fname:
imwrite(fname, data, photometric=RGB, planarconfig='SEPARATE')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 3
assert page.imagelength == 301
assert page.samplesperpixel == 219
assert len(page.extrasamples) == 219-3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_minisblack_planar():
"""Test write planar minisblack."""
data = random_data('uint8', (3, 219, 301))
with TempFileName('minisblack_planar') as fname:
imwrite(fname, data, photometric='MINISBLACK')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 3
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_minisblack_contig():
"""Test write contig minisblack."""
data = random_data('uint8', (219, 301, 3))
with TempFileName('minisblack_contig') as fname:
imwrite(fname, data, photometric='MINISBLACK')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 219
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 3
assert page.imagelength == 301
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_scalar():
"""Test write 2D grayscale."""
data = random_data('uint8', (219, 301))
with TempFileName('scalar') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_scalar_3d():
"""Test write 3D grayscale."""
data = random_data('uint8', (63, 219, 301))
with TempFileName('scalar_3d') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 63
page = tif.pages[62]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
image = tif.asarray()
assert isinstance(image, numpy.ndarray)
assert_array_equal(data, image)
assert__str__(tif)
def test_write_scalar_4d():
"""Test write 4D grayscale."""
data = random_data('uint8', (3, 2, 219, 301))
with TempFileName('scalar_4d') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 6
page = tif.pages[5]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
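In the grayscale tests above, tifffile flattens all leading array dimensions into pages: a (63, 219, 301) array yields 63 pages and a (3, 2, 219, 301) array yields 6. The expected page count is simply the product of the axes before the final length and width axes (an illustrative sketch, not tifffile API):

```python
def expected_pages(shape):
    """Product of all dimensions except the last two (length, width)."""
    pages = 1
    for dim in shape[:-2]:
        pages *= dim
    return pages
```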
def test_write_contig_extrasample():
"""Test write grayscale with contig extrasamples."""
data = random_data('uint8', (219, 301, 2))
with TempFileName('contig_extrasample') as fname:
imwrite(fname, data, planarconfig='CONTIG')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_planar_extrasample():
"""Test write grayscale with planar extrasamples."""
data = random_data('uint8', (2, 219, 301))
with TempFileName('planar_extrasample') as fname:
imwrite(fname, data, planarconfig='SEPARATE')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 2
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_rgb_contig():
"""Test write auto contig RGB."""
data = random_data('uint8', (219, 301, 3))
with TempFileName('rgb_contig') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_rgb_planar():
"""Test write auto planar RGB."""
data = random_data('uint8', (3, 219, 301))
with TempFileName('rgb_planar') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_rgba_contig():
"""Test write auto contig RGBA."""
data = random_data('uint8', (219, 301, 4))
with TempFileName('rgba_contig') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 4
assert page.extrasamples == UNASSALPHA
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_rgba_planar():
"""Test write auto planar RGBA."""
data = random_data('uint8', (4, 219, 301))
with TempFileName('rgba_planar') as fname:
imwrite(fname, data)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 4
assert page.extrasamples == UNASSALPHA
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_contig_rgb():
"""Test write contig RGB with extrasamples."""
data = random_data('uint8', (219, 301, 8))
with TempFileName('extrasamples_contig_rgb') as fname:
imwrite(fname, data, photometric=RGB)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 8
assert len(page.extrasamples) == 5
assert page.extrasamples[0] == UNSPECIFIED
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_extrasamples_planar_rgb():
"""Test write planar RGB with extrasamples."""
data = random_data('uint8', (8, 219, 301))
with TempFileName('extrasamples_planar_rgb') as fname:
imwrite(fname, data, photometric=RGB)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 8
assert len(page.extrasamples) == 5
assert page.extrasamples[0] == UNSPECIFIED
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
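In the two tests above, any samples beyond those required by the photometric interpretation (1 for grayscale, 3 for RGB) become ExtraSamples, UNSPECIFIED by default. A sketch of the expected count (the helper name is illustrative, not tifffile API):

```python
def expected_extrasamples(samplesperpixel, photometric_rgb):
    """Number of samples beyond the color samples."""
    color_samples = 3 if photometric_rgb else 1
    return max(0, samplesperpixel - color_samples)
```

This matches the assertions above: 8 samples per pixel written as RGB leaves 5 extrasamples; 2 samples written as grayscale leaves 1.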
def test_write_tiled_compressed():
"""Test write compressed tiles."""
data = random_data('uint8', (3, 219, 301))
with TempFileName('tiled_compressed') as fname:
imwrite(fname, data, compress=5, tile=(96, 64))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_tiled
assert not page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.tilewidth == 64
assert page.tilelength == 96
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_tiled():
"""Test write tiled."""
data = random_data('uint16', (219, 301))
with TempFileName('tiled') as fname:
imwrite(fname, data, tile=(96, 64))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_tiled
assert not page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.tilewidth == 64
assert page.tilelength == 96
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_tiled_planar():
"""Test write planar tiles."""
data = random_data('uint8', (4, 219, 301))
with TempFileName('tiled_planar') as fname:
imwrite(fname, data, tile=(1, 96, 64))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_tiled
assert not page.is_contiguous
assert page.planarconfig == SEPARATE
assert not page.is_sgi
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.tilewidth == 64
assert page.tilelength == 96
assert page.samplesperpixel == 4
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_tiled_contig():
"""Test write contig tiles."""
data = random_data('uint8', (219, 301, 3))
with TempFileName('tiled_contig') as fname:
imwrite(fname, data, tile=(96, 64))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_tiled
assert not page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.tilewidth == 64
assert page.tilelength == 96
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_tiled_pages():
"""Test write multiple tiled pages."""
data = random_data('uint8', (5, 219, 301, 3))
with TempFileName('tiled_pages') as fname:
imwrite(fname, data, tile=(96, 64))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 5
page = tif.pages[0]
assert page.is_tiled
assert not page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert not page.is_sgi
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.tilewidth == 64
assert page.tilelength == 96
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_volume():
"""Test write tiled volume."""
data = random_data('uint8', (253, 64, 96))
with TempFileName('volume') as fname:
imwrite(fname, data, tile=(64, 64, 64))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_sgi
assert page.is_tiled
assert not page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 96
assert page.imagelength == 64
assert page.imagedepth == 253
assert page.tilewidth == 64
assert page.tilelength == 64
assert page.tiledepth == 64
assert page.samplesperpixel == 1
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_volume_5d_planar_rgb():
"""Test write 5D array as grayscale volumes."""
shape = (2, 3, 256, 64, 96)
data = numpy.empty(shape, dtype='uint8')
data[:] = numpy.arange(256, dtype='uint8').reshape(1, 1, -1, 1, 1)
with TempFileName('volume_5d_planar_rgb') as fname:
imwrite(fname, data, tile=(256, 64, 96))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 2
page = tif.pages[0]
assert page.is_sgi
assert page.is_tiled
assert page.is_contiguous
assert page.planarconfig == SEPARATE
assert page.photometric == RGB
assert page.imagewidth == 96
assert page.imagelength == 64
assert page.imagedepth == 256
assert page.tilewidth == 96
assert page.tilelength == 64
assert page.tiledepth == 256
assert page.samplesperpixel == 3
series = tif.series[0]
assert len(series._pages) == 1
assert len(series.pages) == 2
assert series.offset is not None
assert series.shape == shape
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
def test_write_volume_5d_contig_rgb():
"""Test write 6D array as contig RGB volumes."""
shape = (2, 3, 256, 64, 96, 3)
data = numpy.empty(shape, dtype='uint8')
data[:] = numpy.arange(256, dtype='uint8').reshape(1, 1, -1, 1, 1, 1)
with TempFileName('volume_5d_contig_rgb') as fname:
imwrite(fname, data, tile=(256, 64, 96))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 6
page = tif.pages[0]
assert page.is_sgi
assert page.is_tiled
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 96
assert page.imagelength == 64
assert page.imagedepth == 256
assert page.tilewidth == 96
assert page.tilelength == 64
assert page.tiledepth == 256
assert page.samplesperpixel == 3
# assert page.tags['TileOffsets'].value == (352,)
assert page.tags['TileByteCounts'].value == (4718592,)
series = tif.series[0]
assert len(series._pages) == 1
assert len(series.pages) == 6
assert series.offset is not None
assert series.shape == shape
image = tif.asarray()
assert_array_equal(data, image)
# assert iterating over series.pages
data = data.reshape(6, 256, 64, 96, 3)
for i, page in enumerate(series.pages):
image = page.asarray()
assert_array_equal(data[i], image)
assert__str__(tif)
@pytest.mark.skipif(SKIP_EXTENDED, reason='large file')
def test_write_volume_5d_contig_rgb_empty():
"""Test write empty 6D array as contig RGB volumes."""
shape = (2, 3, 256, 64, 96, 3)
with TempFileName('volume_5d_contig_rgb_empty') as fname:
with TiffWriter(fname) as tif:
tif.save(shape=shape, dtype='uint8', tile=(256, 64, 96))
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 6
page = tif.pages[0]
assert page.is_sgi
assert page.is_tiled
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 96
assert page.imagelength == 64
assert page.imagedepth == 256
assert page.tilewidth == 96
assert page.tilelength == 64
assert page.tiledepth == 256
assert page.samplesperpixel == 3
# assert page.tags['TileOffsets'].value == (352,)
assert page.tags['TileByteCounts'].value == (4718592,)
series = tif.series[0]
assert len(series._pages) == 1
assert len(series.pages) == 6
assert series.offset is not None
assert series.shape == shape
image = tif.asarray()
assert_array_equal(image.shape, shape)
assert__str__(tif)
def test_write_multiple_save():
"""Test append pages."""
data = random_data('uint8', (5, 4, 219, 301, 3))
with TempFileName('append') as fname:
with TiffWriter(fname, bigtiff=True) as tif:
for i in range(data.shape[0]):
tif.save(data[i])
# assert_jhove(fname)
with TiffFile(fname) as tif:
assert tif.is_bigtiff
assert len(tif.pages) == 20
for page in tif.pages:
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 3
image = tif.asarray()
assert_array_equal(data, image)
assert__str__(tif)
@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_HUGE, reason='3 GB image')
def test_write_3gb():
"""Test write 3 GB no-BigTiff file."""
# https://github.com/blink1073/tifffile/issues/47
data = numpy.empty((4096-32, 1024, 1024), dtype='uint8')
with TempFileName('3gb', remove=False) as fname:
imwrite(fname, data)
assert_jhove(fname)
# assert file
with TiffFile(fname) as tif:
assert not tif.is_bigtiff
@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_HUGE, reason='5 GB image')
def test_write_bigtiff():
"""Test write 5 GB BigTiff file."""
data = numpy.empty((640, 1024, 1024), dtype='float64')
data[:] = numpy.arange(640, dtype='float64').reshape(-1, 1, 1)
with TempFileName('bigtiff') as fname:
# TiffWriter should fail without bigtiff parameter
with pytest.raises(ValueError):
with TiffWriter(fname) as tif:
tif.save(data)
# imwrite should use bigtiff for large data
imwrite(fname, data)
# assert_jhove(fname)
# assert file
with TiffFile(fname) as tif:
assert tif.is_bigtiff
assert len(tif.pages) == 640
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 1024
assert page.imagelength == 1024
assert page.samplesperpixel == 1
image = tif.asarray(out='memmap')
assert_array_equal(data, image)
del image
assert__str__(tif)
@pytest.mark.parametrize('compress', [0, 6])
@pytest.mark.parametrize('dtype', ['uint8', 'uint16'])
def test_write_palette(dtype, compress):
"""Test write palette images."""
data = random_data(dtype, (3, 219, 301))
cmap = random_data('uint16', (3, 2**(data.itemsize*8)))
with TempFileName('palette_%i%s' % (compress, dtype)) as fname:
imwrite(fname, data, colormap=cmap, compress=compress)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 3
page = tif.pages[0]
assert page.is_contiguous != bool(compress)
assert page.planarconfig == CONTIG
assert page.photometric == PALETTE
assert page.imagewidth == 301
assert page.imagelength == 219
assert page.samplesperpixel == 1
for i, page in enumerate(tif.pages):
assert_array_equal(apply_colormap(data[i], cmap),
page.asrgb())
assert__str__(tif)
def test_write_palette_django():
"""Test write palette read from existing file."""
fname = data_file('django.tiff')
with TiffFile(fname) as tif:
page = tif.pages[0]
assert page.photometric == PALETTE
assert page.imagewidth == 320
assert page.imagelength == 480
data = page.asarray() # .squeeze() # UserWarning ...
cmap = page.colormap
assert__str__(tif)
with TempFileName('palette_django') as fname:
imwrite(fname, data, colormap=cmap, compress=6)
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 1
page = tif.pages[0]
assert not page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == PALETTE
assert page.imagewidth == 320
assert page.imagelength == 480
assert page.samplesperpixel == 1
image = page.asrgb(uint8=False)
assert_array_equal(apply_colormap(data, cmap), image)
assert__str__(tif)
def test_write_multiple_series():
"""Test write multiple data into one file using various options."""
data1 = imread(data_file('ome/multi-channel-4D-series.ome.tif'))
image1 = imread(data_file('django.tiff'))
image2 = imread(data_file('horse-16bit-col-littleendian.tif'))
with TempFileName('multiple_series') as fname:
with TiffWriter(fname, bigtiff=False) as tif:
tif.save(image1, compress=5, description='Django')
tif.save(image2)
tif.save(data1[0], metadata=dict(axes='TCZYX'))
for i in range(1, data1.shape[0]):
tif.save(data1[i])
tif.save(data1[0], contiguous=False)
tif.save(data1[0, 0, 0], tile=(64, 64))
tif.save(image1, compress='LZMA', description='lzma')
assert_jhove(fname)
with TiffFile(fname) as tif:
assert len(tif.pages) == 124
assert len(tif.series) == 6
series = tif.series[0]
assert not series.offset
assert series.axes == 'YX'
assert_array_equal(image1, series.asarray())
series = tif.series[1]
assert series.offset
assert series.axes == 'YXS'
assert_array_equal(image2, series.asarray())
series = tif.series[2]
assert series.offset
assert series.pages[0].is_contiguous
assert series.axes == 'TCZYX'
result = series.asarray(out='memmap')
assert_array_equal(data1, result)
assert tif.filehandle.path == result.filename
del result
series = tif.series[3]
assert series.offset
assert series.axes == 'QQYX'
assert_array_equal(data1[0], series.asarray())
series = tif.series[4]
assert not series.offset
assert series.axes == 'YX'
assert_array_equal(data1[0, 0, 0], series.asarray())
series = tif.series[5]
assert not series.offset
assert series.axes == 'YX'
assert_array_equal(image1, series.asarray())
assert__str__(tif)
###############################################################################
# Test ImageJ writing
@pytest.mark.skipif(SKIP_EXTENDED, reason='many tests')
@pytest.mark.parametrize('shape', [
(219, 301, 1),
(219, 301, 2),
(219, 301, 3),
(219, 301, 4),
(219, 301, 5),
(1, 219, 301),
(2, 219, 301),
(3, 219, 301),
(4, 219, 301),
(5, 219, 301),
(4, 3, 219, 301),
(4, 219, 301, 3),
(3, 4, 219, 301),
(1, 3, 1, 219, 301),
(3, 1, 1, 219, 301),
(1, 3, 4, 219, 301),
(3, 1, 4, 219, 301),
(3, 4, 1, 219, 301),
(3, 4, 1, 219, 301, 3),
(2, 3, 4, 219, 301),
(4, 3, 2, 219, 301, 3)])
@pytest.mark.parametrize('dtype', ['uint8', 'uint16', 'int16', 'float32'])
@pytest.mark.parametrize('byteorder', ['>', '<'])
def test_write_imagej(byteorder, dtype, shape):
"""Test write ImageJ format."""
# TODO: test compression and bigtiff ?
if dtype != 'uint8' and shape[-1] in (3, 4):
pytest.skip('ImageJ only supports uint8 RGB')
data = random_data(dtype, shape)
fname = 'imagej_%s_%s_%s' % (
{'<': 'le', '>': 'be'}[byteorder], dtype, str(shape).replace(' ', ''))
with TempFileName(fname) as fname:
imwrite(fname, data, byteorder=byteorder, imagej=True)
image = imread(fname)
assert_array_equal(data.squeeze(), image.squeeze())
assert_jhove(fname)
def test_write_imagej_voxel_size():
"""Test write ImageJ with xyz voxel size 2.6755x2.6755x3.9474 µm^3."""
data = numpy.zeros((4, 256, 256), dtype='float32')
data.shape = 4, 1, 256, 256
with TempFileName('imagej_voxel_size') as fname:
imwrite(fname, data, imagej=True,
resolution=(0.373759, 0.373759),
metadata={'spacing': 3.947368, 'unit': 'um'})
with TiffFile(fname) as tif:
assert tif.is_imagej
assert 'unit' in tif.imagej_metadata
assert tif.imagej_metadata['unit'] == 'um'
series = tif.series[0]
assert series.axes == 'ZYX'
assert series.shape == (4, 256, 256)
assert__str__(tif)
assert_jhove(fname)
def test_write_imagej_metadata():
"""Test write additional ImageJ metadata."""
data = numpy.empty((4, 256, 256), dtype='uint16')
data[:] = numpy.arange(256*256, dtype='uint16').reshape(1, 256, 256)
with TempFileName('imagej_metadata') as fname:
imwrite(fname, data, imagej=True, metadata={'unit': 'um'})
with TiffFile(fname) as tif:
assert tif.is_imagej
assert 'unit' in tif.imagej_metadata
assert tif.imagej_metadata['unit'] == 'um'
assert__str__(tif)
assert_jhove(fname)
def test_write_imagej_ijmetadata_tag():
"""Test write and read IJMetadata tag."""
fname = data_file('imagej/IJMetadata.tif')
with TiffFile(fname) as tif:
assert tif.is_imagej
assert tif.byteorder == '>'
assert len(tif.pages) == 3
assert len(tif.series) == 1
data = tif.asarray()
ijmetadata = tif.pages[0].tags['IJMetadata'].value
assert ijmetadata['Info'][:21] == 'FluorescentCells.tif\n'
assert ijmetadata['ROI'][:5] == b'Iout\x00'
assert ijmetadata['Overlays'][1][:5] == b'Iout\x00'
assert ijmetadata['Ranges'] == (0., 255., 0., 255., 0., 255.)
assert ijmetadata['Labels'] == ['Red', 'Green', 'Blue']
assert ijmetadata['LUTs'][2][2, 255] == 255
assert_jhove(fname)
with TempFileName('imagej_ijmetadata') as fname:
imwrite(fname, data, byteorder='>', imagej=True,
metadata={'mode': 'composite'}, ijmetadata=ijmetadata)
with TiffFile(fname) as tif:
assert tif.is_imagej
assert tif.byteorder == '>'
assert len(tif.pages) == 3
assert len(tif.series) == 1
data2 = tif.asarray()
ijmetadata2 = tif.pages[0].tags['IJMetadata'].value
assert__str__(tif)
assert_array_equal(data, data2)
assert ijmetadata2['Info'] == ijmetadata['Info']
assert ijmetadata2['ROI'] == ijmetadata['ROI']
assert ijmetadata2['Overlays'] == ijmetadata['Overlays']
assert ijmetadata2['Ranges'] == ijmetadata['Ranges']
assert ijmetadata2['Labels'] == ijmetadata['Labels']
assert_array_equal(ijmetadata2['LUTs'][2], ijmetadata['LUTs'][2])
assert_jhove(fname)
def test_write_imagej_hyperstack():
"""Test write truncated ImageJ hyperstack."""
shape = (5, 6, 7, 49, 61, 3)
data = numpy.empty(shape, dtype='uint8')
data[:] = numpy.arange(210, dtype='uint8').reshape(5, 6, 7, 1, 1, 1)
with TempFileName('imagej_hyperstack') as fname:
imwrite(fname, data, imagej=True, truncate=True)
# assert file
with TiffFile(fname) as tif:
assert not tif.is_bigtiff
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 61
assert page.imagelength == 49
assert page.samplesperpixel == 3
# assert series properties
series = tif.series[0]
assert series.shape == shape
assert len(series._pages) == 1
assert len(series.pages) == 1
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYXS'
# assert data
image = tif.asarray(out='memmap')
assert_array_equal(data.squeeze(), image.squeeze())
del image
assert__str__(tif)
assert_jhove(fname)
def test_write_imagej_hyperstack_nontrunc():
"""Test write non-truncated ImageJ hyperstack."""
shape = (5, 6, 7, 49, 61, 3)
data = numpy.empty(shape, dtype='uint8')
data[:] = numpy.arange(210, dtype='uint8').reshape(5, 6, 7, 1, 1, 1)
with TempFileName('imagej_hyperstack_nontrunc') as fname:
imwrite(fname, data, imagej=True)
# assert file
with TiffFile(fname) as tif:
assert not tif.is_bigtiff
assert len(tif.pages) == 210
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric == RGB
assert page.imagewidth == 61
assert page.imagelength == 49
assert page.samplesperpixel == 3
# assert series properties
series = tif.series[0]
assert series.shape == shape
assert len(series._pages) == 1
assert len(series.pages) == 210
assert series.dtype.name == 'uint8'
assert series.axes == 'TZCYXS'
# assert data
image = tif.asarray(out='memmap')
assert_array_equal(data.squeeze(), image.squeeze())
del image
# assert iterating over series.pages
data = data.reshape(210, 49, 61, 3)
for i, page in enumerate(series.pages):
image = page.asarray()
assert_array_equal(data[i], image)
assert__str__(tif)
assert_jhove(fname)
def test_write_imagej_append():
"""Test write ImageJ file consecutively."""
data = numpy.empty((256, 1, 256, 256), dtype='uint8')
data[:] = numpy.arange(256, dtype='uint8').reshape(-1, 1, 1, 1)
with TempFileName('imagej_append') as fname:
with TiffWriter(fname, imagej=True) as tif:
for image in data:
tif.save(image)
assert_jhove(fname)
# assert file
with TiffFile(fname) as tif:
assert not tif.is_bigtiff
assert len(tif.pages) == 256
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 256
assert page.imagelength == 256
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert series.shape == (256, 256, 256)
assert series.dtype.name == 'uint8'
assert series.axes == 'ZYX'
# assert data
image = tif.asarray(out='memmap')
assert_array_equal(data.squeeze(), image)
del image
assert__str__(tif)
@pytest.mark.skipif(IS_32BIT, reason='requires 64-bit')
@pytest.mark.skipif(SKIP_HUGE, reason='5 GB image')
def test_write_imagej_raw():
"""Test write ImageJ 5 GB raw file."""
data = numpy.empty((1280, 1, 1024, 1024), dtype='float32')
data[:] = numpy.arange(1280, dtype='float32').reshape(-1, 1, 1, 1)
with TempFileName('imagej_big') as fname:
with pytest.warns(UserWarning):
# UserWarning: truncating ImageJ file
imwrite(fname, data, imagej=True)
assert_jhove(fname)
# assert file
with TiffFile(fname) as tif:
assert not tif.is_bigtiff
assert len(tif.pages) == 1
page = tif.pages[0]
assert page.is_contiguous
assert page.planarconfig == CONTIG
assert page.photometric != RGB
assert page.imagewidth == 1024
assert page.imagelength == 1024
assert page.samplesperpixel == 1
# assert series properties
series = tif.series[0]
assert len(series._pages) == 1
assert len(series.pages) == 1
assert series.shape == (1280, 1024, 1024)
assert series.dtype.name == 'float32'
assert series.axes == 'ZYX'
# assert data
image = tif.asarray(out='memmap')
assert_array_equal(data.squeeze(), image.squeeze())
del image
assert__str__(tif)
###############################################################################
# Test embedded TIFF files
EMBED_NAME = data_file('test_FileHandle.bin')
EMBED_OFFSET = 7077
EMBED_SIZE = 5744
EMBED_OFFSET1 = 13820
EMBED_SIZE1 = 7936382
def assert_embed_tif(tif):
"""Assert embedded TIFF file."""
# 4 series in 6 pages
assert tif.byteorder == '<'
assert len(tif.pages) == 6
assert len(tif.series) == 4
# assert series 0 properties
series = tif.series[0]
assert series.shape == (3, 20, 20)
assert series.dtype.name == 'uint8'
assert series.axes == 'IYX'
page = series.pages[0]
assert page.compression == LZW
assert page.imagewidth == 20
assert page.imagelength == 20
assert page.bitspersample == 8
assert page.samplesperpixel == 1
data = tif.asarray(series=0)
assert isinstance(data, numpy.ndarray)
assert data.shape == (3, 20, 20)
assert data.dtype.name == 'uint8'
assert tuple(data[:, 9, 9]) == (19, 90, 206)
# assert series 1 properties
series = tif.series[1]
assert series.shape == (10, 10, 3)
assert series.dtype.name == 'float32'
assert series.axes == 'YXS'
page = series.pages[0]
assert page.photometric == RGB
assert page.compression == LZW
assert page.imagewidth == 10
assert page.imagelength == 10
assert page.bitspersample == 32
assert page.samplesperpixel == 3
data = tif.asarray(series=1)
assert isinstance(data, numpy.ndarray)
assert data.shape == (10, 10, 3)
assert data.dtype.name == 'float32'
assert round(abs(data[9, 9, 1]-214.5733642578125), 7) == 0
# assert series 2 properties
series = tif.series[2]
assert series.shape == (20, 20, 3)
assert series.dtype.name == 'uint8'
assert series.axes == 'YXS'
page = series.pages[0]
assert page.photometric == RGB
assert page.compression == LZW
assert page.imagewidth == 20
assert page.imagelength == 20
assert page.bitspersample == 8
assert page.samplesperpixel == 3
data = tif.asarray(series=2)
assert isinstance(data, numpy.ndarray)
assert data.shape == (20, 20, 3)
assert data.dtype.name == 'uint8'
assert tuple(data[9, 9, :]) == (19, 90, 206)
# assert series 3 properties
series = tif.series[3]
assert series.shape == (10, 10)
assert series.dtype.name == 'float32'
assert series.axes == 'YX'
page = series.pages[0]
assert page.compression == LZW
assert page.imagewidth == 10
assert page.imagelength == 10
assert page.bitspersample == 32
assert page.samplesperpixel == 1
data = tif.asarray(series=3)
assert isinstance(data, numpy.ndarray)
assert data.shape == (10, 10)
assert data.dtype.name == 'float32'
assert round(abs(data[9, 9]-223.1648712158203), 7) == 0
assert__str__(tif)
def assert_embed_micromanager(tif):
"""Assert embedded MicroManager TIFF file."""
assert tif.is_ome
assert tif.is_imagej
assert tif.is_micromanager
assert tif.byteorder == '<'
assert len(tif.pages) == 15
assert len(tif.series) == 1
# assert non-tiff micromanager_metadata
tags = tif.micromanager_metadata['Summary']
assert tags['MicroManagerVersion'] == '1.4.x dev'
# assert page properties
page = tif.pages[0]
assert page.is_contiguous
assert page.compression == NONE
assert page.imagewidth == 512
assert page.imagelength == 512
assert page.bitspersample == 16
assert page.samplesperpixel == 1
# two description tags
assert page.description.startswith('
...
if len(argv) > 1:
lsm2bin(argv[1], argv[2] if len(argv) > 2 else None)
else:
print()
print(__doc__.strip())
if __name__ == '__main__':
sys.exit(main())
tifffile-2018.11.28/tifffile/tifffile.py
# -*- coding: utf-8 -*-
# tifffile.py
# Copyright (c) 2008-2018, Christoph Gohlke
# Copyright (c) 2008-2018, The Regents of the University of California
# Produced at the Laboratory for Fluorescence Dynamics
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Read and write TIFF(r) files.
Tifffile is a Python library to
(1) store numpy arrays in TIFF (Tagged Image File Format) files, and
(2) read image and metadata from TIFF-like files used in bioimaging.
Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, STK, LSM, NIH,
SGI, ImageJ, MicroManager, FluoView, ScanImage, SEQ, GEL, SVS, SCN, SIS, ZIF,
QPI, and GeoTIFF files.
Numpy arrays can be written to TIFF, BigTIFF, and ImageJ hyperstack compatible
files in multi-page, memory-mappable, tiled, predicted, or compressed form.
Only a subset of the TIFF specification is supported, mainly uncompressed and
losslessly compressed 1, 8, 16, 32 and 64-bit integer, 16, 32 and 64-bit float,
grayscale and RGB(A) images.
Specifically, reading slices of image data, CCITT and OJPEG compression,
chroma subsampling without JPEG compression, or IPTC and XMP metadata are not
implemented.
TIFF(r), the Tagged Image File Format, is a trademark and under control of
Adobe Systems Incorporated. BigTIFF allows for files greater than 4 GB.
STK, LSM, FluoView, SGI, SEQ, GEL, and OME-TIFF are custom extensions
defined by Molecular Devices (Universal Imaging Corporation), Carl Zeiss
MicroImaging, Olympus, Silicon Graphics International, Media Cybernetics,
Molecular Dynamics, and the Open Microscopy Environment consortium
respectively.
For command line usage run ``python -m tifffile --help``
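The numpy round-trip described above can be exercised with a minimal sketch using the high-level ``imwrite`` and ``imread`` functions (the filename ``temp.tif`` is a placeholder):

```python
import numpy
from tifffile import imwrite, imread

# Store a three-page uint8 grayscale stack in a TIFF file,
# then read it back as a single numpy array.
data = numpy.random.randint(0, 255, (3, 64, 96), 'uint8')
imwrite('temp.tif', data)
image = imread('temp.tif')
assert numpy.array_equal(data, image)
```

``imwrite`` writes each 64x96 plane as a separate TIFF page; ``imread`` reassembles the pages of the series into one array.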
:Author:
`Christoph Gohlke `_
:Organization:
Laboratory for Fluorescence Dynamics, University of California, Irvine
:Version: 2018.11.28
Requirements
------------
* `CPython 2.7 or 3.5+ 64-bit `_
* `Numpy 1.14 `_
* `Imagecodecs 2018.11.8 `_
(optional; used for decoding LZW, JPEG, etc.)
* `Matplotlib 2.2 `_ (optional; used for plotting)
* Python 2.7 requires 'futures', 'enum34', and 'pathlib'.
Revisions
---------
2018.11.28
Pass 2739 tests.
Make SubIFDs accessible as TiffPage.pages.
Make parsing of TiffSequence axes pattern optional (backward incompatible).
Limit parsing of TiffSequence axes pattern to file names, not path names.
Do not interpolate in imshow if image dimensions <= 512, else use bilinear.
Use logging.warning instead of warnings.warn in many cases.
Fix numpy FutureWarning for out == 'memmap'.
Adjust ZSTD and WebP compression to libtiff-4.0.10 (WIP).
Decode old style LZW with imagecodecs >= 2018.11.8.
Remove TiffFile.qptiff_metadata (QPI metadata are per page).
Do not use keyword arguments before variable positional arguments.
Make either all or none return statements in a function return expression.
Use pytest parametrize to generate tests.
Replace test classes with functions.
2018.11.6
Rename imsave function to imwrite.
Readd Python implementations of packints, delta, and bitorder codecs.
Fix TiffFrame.compression AttributeError (bug fix).
2018.10.18
Rename tiffile package to tifffile.
2018.10.10
Pass 2710 tests.
Read ZIF, the Zoomable Image Format (WIP).
Decode YCbCr JPEG as RGB (tentative).
Improve restoration of incomplete tiles.
Allow writing grayscale with extrasamples without specifying planarconfig.
Enable decoding of PNG and JXR via imagecodecs.
Deprecate 32-bit platforms (too many memory errors during tests).
2018.9.27
Read Olympus SIS (WIP).
Allow writing non-BigTIFF files up to ~4 GB (bug fix).
Fix parsing date and time fields in SEM metadata (bug fix).
Detect some circular IFD references.
Enable WebP codecs via imagecodecs.
Add option to read TiffSequence from ZIP containers.
Remove TiffFile.isnative.
Move TIFF struct format constants out of TiffFile namespace.
2018.8.31
Pass 2699 tests.
Fix wrong TiffTag.valueoffset (bug fix).
Towards reading Hamamatsu NDPI (WIP).
Enable PackBits compression of byte and bool arrays.
Fix parsing NULL terminated CZ_SEM strings.
2018.8.24
Move tifffile.py and related modules into tiffile package.
Move usage examples to module docstring.
Enable multi-threading for compressed tiles and pages by default.
Add option to concurrently decode image tiles using threads.
Do not skip empty tiles (bug fix).
Read JPEG and J2K compressed strips and tiles.
Allow floating point predictor on write.
Add option to specify subfiletype on write.
Depend on imagecodecs package instead of _tifffile, lzma, etc modules.
Remove reverse_bitorder, unpack_ints, and decode functions.
Use pytest instead of unittest.
2018.6.20
Save RGBA with unassociated extrasample by default (backward incompatible).
Add option to specify ExtraSamples values.
2018.6.17
Pass 2680 tests.
Towards reading JPEG and other compressions via imagecodecs package (WIP).
Read SampleFormat VOID as UINT.
Add function to validate TIFF using 'jhove -m TIFF-hul'.
Save bool arrays as bilevel TIFF.
Accept pathlib.Path as filenames.
Move 'software' argument from TiffWriter __init__ to save.
Raise DOS limit to 16 TB.
Lazy load lzma and zstd compressors and decompressors.
Add option to save IJMetadata tags.
Return correct number of pages for truncated series (bug fix).
Move EXIF tags to TIFF.TAG as per TIFF/EP standard.
2018.2.18
Pass 2293 tests.
Always save RowsPerStrip and Resolution tags as required by TIFF standard.
Do not use badly typed ImageDescription.
Coerce bad ASCII string tags to bytes.
Tuning of __str__ functions.
Fix reading 'undefined' tag values (bug fix).
Read and write ZSTD compressed data.
Use hexdump to print byte strings.
Determine TIFF byte order from data dtype in imsave.
Add option to specify RowsPerStrip for compressed strips.
Allow memory-map of arrays with non-native byte order.
Attempt to handle ScanImage <= 5.1 files.
Restore TiffPageSeries.pages sequence interface.
Use numpy.frombuffer instead of fromstring to read from binary data.
Parse GeoTIFF metadata.
Add option to apply horizontal differencing before compression.
Towards reading PerkinElmer QPI (QPTIFF, no test files).
Do not index out of bounds data in tifffile.c unpackbits and decodelzw.
2017.9.29 (tentative)
Many backward incompatible changes improving speed and resource usage:
Pass 2268 tests.
Add detail argument to __str__ function. Remove info functions.
Fix potential issue correcting offsets of large LSM files with positions.
Remove TiffFile sequence interface; use TiffFile.pages instead.
Do not make tag values available as TiffPage attributes.
Use str (not bytes) type for tag and metadata strings (WIP).
Use documented standard tag and value names (WIP).
Use enums for some documented TIFF tag values.
Remove 'memmap' and 'tmpfile' options; use out='memmap' instead.
Add option to specify output in asarray functions.
Add option to concurrently decode pages using threads.
Add TiffPage.asrgb function (WIP).
Do not apply colormap in asarray.
Remove 'colormapped', 'rgbonly', and 'scale_mdgel' options from asarray.
Consolidate metadata in TiffFile _metadata functions.
Remove non-tag metadata properties from TiffPage.
Add function to convert LSM to tiled BIN files.
Align image data in file.
Make TiffPage.dtype a numpy.dtype.
Add 'ndim' and 'size' properties to TiffPage and TiffPageSeries.
Allow imsave to write non-BigTIFF files up to ~4 GB.
Only read one page for shaped series if possible.
Add memmap function to create memory-mapped array stored in TIFF file.
Add option to save empty arrays to TIFF files.
Add option to save truncated TIFF files.
Allow single tile images to be saved contiguously.
Add optional movie mode for files with uniform pages.
Lazy load pages.
Use lightweight TiffFrame for IFDs sharing properties with key TiffPage.
Move module constants to 'TIFF' namespace (speed up module import).
Remove 'fastij' option from TiffFile.
Remove 'pages' parameter from TiffFile.
Remove TIFFfile alias.
Deprecate Python 2.
Require enum34 and futures packages on Python 2.7.
Remove Record class and return all metadata as dict instead.
Add functions to parse STK, MetaSeries, ScanImage, SVS, Pilatus metadata.
Read tags from EXIF and GPS IFDs.
Use pformat for tag and metadata values.
Fix reading some UIC tags (bug fix).
Do not modify input array in imshow (bug fix).
Fix Python implementation of unpack_ints.
2017.5.23
Pass 1961 tests.
Write correct number of SampleFormat values (bug fix).
Use Adobe deflate code to write ZIP compressed files.
Add option to pass tag values as packed binary data for writing.
Defer tag validation to attribute access.
Use property instead of lazyattr decorator for simple expressions.
2017.3.17
Write IFDs and tag values on word boundaries.
Read ScanImage metadata.
Remove is_rgb and is_indexed attributes from TiffFile.
Create files used by doctests.
2017.1.12
Read Zeiss SEM metadata.
Read OME-TIFF with invalid references to external files.
Rewrite C LZW decoder (5x faster).
Read corrupted LSM files missing EOI code in LZW stream.
2017.1.1
Add option to append images to existing TIFF files.
Read files without pages.
Read S-FEG and Helios NanoLab tags created by FEI software.
Allow saving Color Filter Array (CFA) images.
Add info functions returning more information about TiffFile and TiffPage.
Add option to read specific pages only.
Remove maxpages argument (backward incompatible).
Remove test_tifffile function.
2016.10.28
Pass 1944 tests.
Improve detection of ImageJ hyperstacks.
Read TVIPS metadata created by EM-MENU (by Marco Oster).
Add option to disable using OME-XML metadata.
Allow non-integer range attributes in modulo tags (by Stuart Berg).
2016.6.21
Do not always memmap contiguous data in page series.
2016.5.13
Add option to specify resolution unit.
Write grayscale images with extra samples when planarconfig is specified.
Do not write RGB color images with 2 samples.
Reorder TiffWriter.save keyword arguments (backward incompatible).
2016.4.18
Pass 1932 tests.
TiffWriter, imread, and imsave accept open binary file streams.
2016.04.13
Correctly handle reversed fill order in 2 and 4 bps images (bug fix).
Implement reverse_bitorder in C.
2016.03.18
Fix saving additional ImageJ metadata.
2016.2.22
Pass 1920 tests.
Write 8 bytes double tag values using offset if necessary (bug fix).
Add option to disable writing second image description tag.
Detect tags with incorrect counts.
Disable color mapping for LSM.
2015.11.13
Read LSM 6 mosaics.
Add option to specify directory of memory-mapped files.
Add command line options to specify vmin and vmax values for colormapping.
2015.10.06
New helper function to apply colormaps.
Renamed is_palette attributes to is_indexed (backward incompatible).
Color-mapped samples are now contiguous (backward incompatible).
Do not color-map ImageJ hyperstacks (backward incompatible).
Towards reading Leica SCN.
2015.9.25
Read images with reversed bit order (FillOrder is LSB2MSB).
2015.9.21
Read RGB OME-TIFF.
Warn about malformed OME-XML.
2015.9.16
Detect some corrupted ImageJ metadata.
Better axes labels for 'shaped' files.
Do not create TiffTag for default values.
Chroma subsampling is not supported.
Memory-map data in TiffPageSeries if possible (optional).
2015.8.17
Pass 1906 tests.
Write ImageJ hyperstacks (optional).
Read and write LZMA compressed data.
Specify datetime when saving (optional).
Save tiled and color-mapped images (optional).
Ignore void bytecounts and offsets if possible.
Ignore bogus image_depth tag created by ISS Vista software.
Decode floating point horizontal differencing (not tiled).
Save image data contiguously if possible.
Only read first IFD from ImageJ files if possible.
Read ImageJ 'raw' format (files larger than 4 GB).
TiffPageSeries class for pages with compatible shape and data type.
Try to read incomplete tiles.
Open file dialog if no filename is passed on command line.
Ignore errors when decoding OME-XML.
Rename decoder functions (backward incompatible).
2014.8.24
TiffWriter class for incrementally writing images.
Simplify examples.
2014.8.19
Add memmap function to FileHandle.
Add function to determine if image data in TiffPage is memory-mappable.
Do not close files if multifile_close parameter is False.
2014.8.10
Pass 1730 tests.
Return all extrasamples by default (backward incompatible).
Read data from series of pages into memory-mapped array (optional).
Squeeze OME dimensions (backward incompatible).
Workaround missing EOI code in strips.
Support image and tile depth tags (SGI extension).
Better handling of STK/UIC tags (backward incompatible).
Disable color mapping for STK.
Julian to datetime converter.
TIFF ASCII type may be NULL separated.
Unwrap strip offsets for LSM files greater than 4 GB.
Correct strip byte counts in compressed LSM files.
Skip missing files in OME series.
Read embedded TIFF files.
2014.2.05
Save rational numbers as type 5 (bug fix).
2013.12.20
Keep other files in OME multi-file series closed.
FileHandle class to abstract binary file handle.
Disable color mapping for bad OME-TIFF produced by bio-formats.
Read bad OME-XML produced by ImageJ when cropping.
2013.11.3
Allow zlib compress data in imsave function (optional).
Memory-map contiguous image data (optional).
2013.10.28
Read MicroManager metadata and little-endian ImageJ tag.
Save extra tags in imsave function.
Save tags in ascending order by code (bug fix).
2012.10.18
Accept file like objects (read from OIB files).
2012.8.21
Rename TIFFfile to TiffFile and TIFFpage to TiffPage.
TiffSequence class for reading sequence of TIFF files.
Read UltraQuant tags.
Allow float numbers as resolution in imsave function.
2012.8.3
Read MD GEL tags and NIH Image header.
2012.7.25
Read ImageJ tags.
...
Notes
-----
The API is not stable yet and might change between revisions.
Tested on little-endian platforms only.
Python 2.7, 3.4, and 32-bit versions are deprecated.
Other libraries for reading scientific TIFF files from Python:
* Python-bioformats
* Imread
* GDAL
* OpenSlide-python
* PyLibTiff
* SimpleITK
* PyLSM
* PyMca.TiffIO.py (same as fabio.TiffIO)
* BioImageXD.Readers
* Cellcognition.io
* pymimage
* pytiff
Acknowledgements
----------------
* Egor Zindy, University of Manchester, for lsm_scan_info specifics.
* Wim Lewis for a bug fix and some LSM functions.
* Hadrien Mary for help on reading MicroManager files.
* Christian Kliche for help writing tiled and color-mapped files.
References
----------
1) TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.
https://www.adobe.io/open/standards/TIFF.html
2) TIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html
3) MetaMorph Stack (STK) Image File Format.
http://mdc.custhelp.com/app/answers/detail/a_id/18862
4) Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010).
Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011
5) The OME-TIFF format.
https://docs.openmicroscopy.org/ome-model/5.6.4/ome-tiff/
6) UltraQuant(r) Version 6.0 for Windows Start-Up Guide.
http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf
7) Micro-Manager File Formats.
https://micro-manager.org/wiki/Micro-Manager_File_Formats
8) Tags for TIFF and Related Specifications. Digital Preservation.
https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
9) ScanImage BigTiff Specification - ScanImage 2016.
http://scanimage.vidriotechnologies.com/display/SI2016/
ScanImage+BigTiff+Specification
10) CIPA DC-008-2016: Exchangeable image file format for digital still cameras:
Exif Version 2.31.
http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf
11) ZIF, the Zoomable Image File format. http://zif.photo/
Examples
--------
Save a 3D numpy array to a multi-page, 16-bit grayscale TIFF file:
>>> data = numpy.random.randint(0, 2**16, (4, 301, 219), 'uint16')
>>> imwrite('temp.tif', data, photometric='minisblack')
Read the whole image stack from the TIFF file as numpy array:
>>> image_stack = imread('temp.tif')
>>> image_stack.shape
(4, 301, 219)
>>> image_stack.dtype
dtype('uint16')
Read the image from first page (IFD) in the TIFF file:
>>> image = imread('temp.tif', key=0)
>>> image.shape
(301, 219)
Read images from a sequence of TIFF files as numpy array:
>>> image_sequence = imread(['temp.tif', 'temp.tif'])
>>> image_sequence.shape
(2, 4, 301, 219)
Save a numpy array to a single-page RGB TIFF file:
>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')
Save a floating-point array and metadata, using zlib compression:
>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')
>>> imwrite('temp.tif', data, compress=6, metadata={'axes': 'TZCYX'})
Save a volume with xyz voxel size 2.6755x2.6755x3.9474 µm^3 to ImageJ file:
>>> volume = numpy.random.randn(57*256*256).astype('float32')
>>> volume.shape = 1, 57, 1, 256, 256, 1 # dimensions in TZCYXS order
>>> imwrite('temp.tif', volume, imagej=True, resolution=(1./2.6755, 1./2.6755),
... metadata={'spacing': 3.947368, 'unit': 'um'})
Read hyperstack and metadata from ImageJ file:
>>> with TiffFile('temp.tif') as tif:
... imagej_hyperstack = tif.asarray()
... imagej_metadata = tif.imagej_metadata
>>> imagej_hyperstack.shape
(57, 256, 256)
>>> imagej_metadata['slices']
57
Create an empty TIFF file and write to the memory-mapped numpy array:
>>> memmap_image = memmap('temp.tif', shape=(256, 256), dtype='float32')
>>> memmap_image[255, 255] = 1.0
>>> memmap_image.flush()
>>> memmap_image.shape, memmap_image.dtype
((256, 256), dtype('float32'))
>>> del memmap_image
Memory-map image data in the TIFF file:
>>> memmap_image = memmap('temp.tif', page=0)
>>> memmap_image[255, 255]
1.0
>>> del memmap_image
Successively append images to a BigTIFF file:
>>> data = numpy.random.randint(0, 255, (5, 2, 3, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif', bigtiff=True) as tif:
... for i in range(data.shape[0]):
... tif.save(data[i], compress=6, photometric='minisblack')
Iterate over pages and tags in the TIFF file and successively read images:
>>> with TiffFile('temp.tif') as tif:
... image_stack = tif.asarray()
... for page in tif.pages:
... for tag in page.tags.values():
... tag_name, tag_value = tag.name, tag.value
... image = page.asarray()
Save two image series to a TIFF file:
>>> data0 = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> data1 = numpy.random.randint(0, 255, (5, 301, 219), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
... tif.save(data0, compress=6, photometric='rgb')
... tif.save(data1, compress=6, photometric='minisblack')
Read the second image series from the TIFF file:
>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(5, 301, 219)
Read an image stack from a sequence of TIFF files with a file name pattern:
>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = TiffSequence('temp_C001*.tif', pattern='axes')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
"""
from __future__ import division, print_function
__version__ = '2018.11.28'
__docformat__ = 'restructuredtext en'
__all__ = ('imwrite', 'imsave', 'imread', 'imshow', 'memmap',
'TiffFile', 'TiffWriter', 'TiffSequence', 'FileHandle',
'TiffPage', 'TiffFrame', 'TiffTag', 'TIFF',
# utility functions used by oiffile, czifile, etc
'lazyattr', 'natural_sorted', 'stripnull', 'transpose_axes',
'squeeze_axes', 'create_output', 'repeat_nd', 'format_size',
'product', 'xml2dict', 'pformat', 'str2bytes', '_app_show')
import sys
import os
import io
import re
import glob
import math
import time
import json
import enum
import struct
import pathlib
import logging
import warnings
import binascii
import datetime
import threading
import collections
import concurrent.futures
import numpy
try:
import imagecodecs
except ImportError:
imagecodecs = None
# delay import of mmap, pprint, fractions, xml, tkinter, lxml, matplotlib,
# subprocess, multiprocessing, tempfile, zipfile, fnmatch
def imread(files, **kwargs):
"""Return image data from TIFF file(s) as numpy array.
Refer to the TiffFile class and member functions for documentation.
Parameters
----------
files : str, binary stream, or sequence
File name, seekable binary stream, glob pattern, or sequence of
file names.
kwargs : dict
Parameters 'multifile' and 'is_ome' are passed to the TiffFile
constructor.
The 'pattern' parameter is passed to the TiffSequence constructor.
Other parameters are passed to the asarray functions.
The first image series is returned if no arguments are provided.
"""
kwargs_file = parse_kwargs(kwargs, 'multifile', 'is_ome')
kwargs_seq = parse_kwargs(kwargs, 'pattern')
if isinstance(files, basestring) and any(i in files for i in '?*'):
files = glob.glob(files)
if not files:
raise ValueError('no files found')
if not hasattr(files, 'seek') and len(files) == 1:
files = files[0]
if isinstance(files, basestring) or hasattr(files, 'seek'):
with TiffFile(files, **kwargs_file) as tif:
return tif.asarray(**kwargs)
else:
with TiffSequence(files, **kwargs_seq) as imseq:
return imseq.asarray(**kwargs)
def imwrite(file, data=None, shape=None, dtype=None, bigsize=2**32-2**25,
**kwargs):
"""Write numpy array to TIFF file.
Refer to the TiffWriter class and member functions for documentation.
Parameters
----------
file : str or binary stream
File name or writable binary stream, such as an open file or BytesIO.
data : array_like
Input image. The last dimensions are assumed to be image depth,
height, width, and samples.
If None, an empty array of the specified shape and dtype is
saved to file.
Unless 'byteorder' is specified in 'kwargs', the TIFF file byte order
is determined from the data's dtype or the dtype argument.
shape : tuple
If 'data' is None, shape of an empty array to save to the file.
dtype : numpy.dtype
If 'data' is None, data-type of an empty array to save to the file.
bigsize : int
Create a BigTIFF file if the size of data in bytes is larger than
this threshold and 'imagej' or 'truncate' are not enabled.
By default, the threshold is 4 GB minus 32 MB reserved for metadata.
Use the 'bigtiff' parameter to explicitly specify the type of
file created.
kwargs : dict
Parameters 'append', 'byteorder', 'bigtiff', and 'imagej', are passed
to TiffWriter(). Other parameters are passed to TiffWriter.save().
Returns
-------
offset, bytecount : tuple or None
If the image data are written contiguously, return offset and bytecount
of image data in the file.
"""
tifargs = parse_kwargs(kwargs, 'append', 'bigtiff', 'byteorder', 'imagej')
if data is None:
size = product(shape) * numpy.dtype(dtype).itemsize
byteorder = numpy.dtype(dtype).byteorder
else:
try:
size = data.nbytes
byteorder = data.dtype.byteorder
except Exception:
size = 0
byteorder = None
if size > bigsize and 'bigtiff' not in tifargs and not (
tifargs.get('imagej', False) or tifargs.get('truncate', False)):
tifargs['bigtiff'] = True
if 'byteorder' not in tifargs:
tifargs['byteorder'] = byteorder
with TiffWriter(file, **tifargs) as tif:
return tif.save(data, shape, dtype, **kwargs)
imsave = imwrite
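The automatic BigTIFF decision in imwrite can be reproduced with a few lines. This is a sketch under the stated default threshold (4 GB minus 32 MB); `needs_bigtiff` is a hypothetical helper, not part of the tifffile API:

```python
import numpy

BIGSIZE = 2**32 - 2**25  # default threshold: 4 GB minus 32 MB for metadata

def needs_bigtiff(shape, dtype, bigsize=BIGSIZE):
    # Reproduce imwrite's size check: data larger than the threshold
    # is written as BigTIFF unless 'bigtiff', 'imagej', or 'truncate'
    # is specified by the caller.
    size = int(numpy.prod(shape)) * numpy.dtype(dtype).itemsize
    return size > bigsize

# A small multi-page stack stays well below the threshold ...
assert not needs_bigtiff((4, 301, 219), 'uint16')
# ... while a large float32 volume exceeds 4 GB and triggers BigTIFF.
assert needs_bigtiff((1100, 2048, 2048), 'float32')
```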
def memmap(filename, shape=None, dtype=None, page=None, series=0, mode='r+',
**kwargs):
"""Return memory-mapped numpy array stored in TIFF file.
Memory-mapping requires data stored in native byte order, without tiling,
compression, predictors, etc.
If 'shape' and 'dtype' are provided, existing files will be overwritten or
appended to depending on the 'append' parameter.
Otherwise the image data of a specified page or series in an existing
file will be memory-mapped. By default, the image data of the first
series is memory-mapped.
Call flush() to write any changes in the array to the file.
Raise ValueError if the image data in the file is not memory-mappable.
Parameters
----------
filename : str
Name of the TIFF file which stores the array.
shape : tuple
Shape of the empty array.
dtype : numpy.dtype
Data-type of the empty array.
page : int
Index of the page which image data to memory-map.
series : int
Index of the page series which image data to memory-map.
mode : {'r+', 'r', 'c'}
The file open mode. Default is to open existing file for reading and
writing ('r+').
kwargs : dict
Additional parameters passed to imwrite() or TiffFile().
"""
if shape is not None and dtype is not None:
# create a new, empty array
kwargs.update(data=None, shape=shape, dtype=dtype, returnoffset=True,
align=TIFF.ALLOCATIONGRANULARITY)
result = imwrite(filename, **kwargs)
if result is None:
# TODO: fail before creating file or writing data
raise ValueError('image data are not memory-mappable')
offset = result[0]
else:
# use existing file
with TiffFile(filename, **kwargs) as tif:
if page is not None:
page = tif.pages[page]
if not page.is_memmappable:
raise ValueError('image data are not memory-mappable')
offset, _ = page.is_contiguous
shape = page.shape
dtype = page.dtype
else:
series = tif.series[series]
if series.offset is None:
raise ValueError('image data are not memory-mappable')
shape = series.shape
dtype = series.dtype
offset = series.offset
dtype = tif.byteorder + dtype.char
return numpy.memmap(filename, dtype, mode, offset, shape, 'C')
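The final numpy.memmap call above maps the contiguous pixel data starting at a byte offset into the file. A minimal standalone sketch, using a scratch file with a fake 16-byte "header" in place of a real TIFF:

```python
import os
import tempfile

import numpy

# Write a small array at a known offset, then map it back read-write,
# the same way memmap() maps contiguous image data in a TIFF file.
path = os.path.join(tempfile.mkdtemp(), 'raw.bin')
offset = 16  # bytes occupied by header/IFD structures before the pixels
data = numpy.arange(12, dtype='<u2').reshape(3, 4)
with open(path, 'wb') as fh:
    fh.write(b'\0' * offset)
    fh.write(data.tobytes())

mapped = numpy.memmap(path, '<u2', 'r+', offset, (3, 4), 'C')
assert (mapped[:] == data).all()
mapped[0, 0] = 99  # changes are written back to the file on flush()
mapped.flush()
```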
class lazyattr(object):
"""Attribute whose value is computed on first access."""
# TODO: help() doesn't work
__slots__ = ('func',)
def __init__(self, func):
self.func = func
# self.__name__ = func.__name__
# self.__doc__ = func.__doc__
# self.lock = threading.RLock()
def __get__(self, instance, owner):
# with self.lock:
if instance is None:
return self
try:
value = self.func(instance)
except AttributeError as e:
raise RuntimeError(e)
if value is NotImplemented:
return getattr(super(owner, instance), self.func.__name__)
setattr(instance, self.func.__name__, value)
return value
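The descriptor pattern used by lazyattr can be demonstrated in isolation. A minimal sketch (without the AttributeError and NotImplemented handling of the class above): a non-data descriptor whose first access computes the value and then shadows itself in the instance `__dict__`, so later accesses skip the function entirely.

```python
class lazy(object):
    # Non-data descriptor: no __set__, so an instance attribute of the
    # same name takes precedence once it exists.
    __slots__ = ('func',)

    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = self.func(instance)
        # cache: replace the descriptor lookup with a plain attribute
        setattr(instance, self.func.__name__, value)
        return value

class Page(object):
    calls = 0

    @lazy
    def shape(self):
        Page.calls += 1
        return (256, 256)

page = Page()
assert page.shape == (256, 256)
assert page.shape == (256, 256)  # cached: the function ran only once
assert Page.calls == 1
```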
class TiffWriter(object):
"""Write numpy arrays to TIFF file.
TiffWriter instances must be closed using the 'close' method, which is
automatically called when using the 'with' context manager.
TiffWriter's main purpose is saving nD numpy arrays as TIFF,
not creating every possible TIFF format. Specifically, JPEG compression,
SubIFDs, ExifIFD, and GPSIFD tags are not supported.
"""
def __init__(self, file, bigtiff=False, byteorder=None, append=False,
imagej=False):
"""Open a TIFF file for writing.
An empty TIFF file is created if the file does not exist, else the
file is overwritten with an empty TIFF file unless 'append'
is true. Use bigtiff=True when creating files larger than 4 GB.
Parameters
----------
file : str, binary stream, or FileHandle
File name or writable binary stream, such as an open file
or BytesIO.
bigtiff : bool
If True, the BigTIFF format is used.
byteorder : {'<', '>', '=', '|'}
The endianness of the data in the file.
By default, this is the system's native byte order.
append : bool
If True and 'file' is an existing standard TIFF file, image data
and tags are appended to the file.
Appending data may corrupt specifically formatted TIFF files
such as LSM, STK, ImageJ, NIH, or FluoView.
imagej : bool
If True, write an ImageJ hyperstack compatible file.
This format can handle data types uint8, uint16, or float32 and
data shapes up to 6 dimensions in TZCYXS order.
RGB images (S=3 or S=4) must be uint8.
ImageJ's default byte order is big-endian but this implementation
uses the system's native byte order by default.
ImageJ hyperstacks do not support BigTIFF or compression.
The ImageJ file format is undocumented.
When using compression, use ImageJ's Bio-Formats import function.
"""
if append:
# determine if file is an existing TIFF file that can be extended
try:
with FileHandle(file, mode='rb', size=0) as fh:
pos = fh.tell()
try:
with TiffFile(fh) as tif:
if (append != 'force' and
any(getattr(tif, 'is_'+a) for a in (
'lsm', 'stk', 'imagej', 'nih',
'fluoview', 'micromanager'))):
raise ValueError('file contains metadata')
byteorder = tif.byteorder
bigtiff = tif.is_bigtiff
self._ifdoffset = tif.pages.next_page_offset
except Exception as e:
raise ValueError('cannot append to file: %s' % str(e))
finally:
fh.seek(pos)
except (IOError, FileNotFoundError):
append = False
if byteorder in (None, '=', '|'):
byteorder = '<' if sys.byteorder == 'little' else '>'
elif byteorder not in ('<', '>'):
raise ValueError('invalid byteorder %s' % byteorder)
if imagej and bigtiff:
warnings.warn('writing incompatible BigTIFF ImageJ')
self._byteorder = byteorder
self._imagej = bool(imagej)
self._truncate = False
self._metadata = None
self._colormap = None
self._descriptionoffset = 0
self._descriptionlen = 0
self._descriptionlenoffset = 0
self._tags = None
self._shape = None # normalized shape of data in consecutive pages
self._datashape = None # shape of data in consecutive pages
self._datadtype = None # data type
self._dataoffset = None # offset to data
self._databytecounts = None # byte counts per plane
self._tagoffsets = None # strip or tile offset tag code
if bigtiff:
self._bigtiff = True
self._offsetsize = 8
self._tagsize = 20
self._tagnoformat = 'Q'
self._offsetformat = 'Q'
self._valueformat = '8s'
else:
self._bigtiff = False
self._offsetsize = 4
self._tagsize = 12
self._tagnoformat = 'H'
self._offsetformat = 'I'
self._valueformat = '4s'
if append:
self._fh = FileHandle(file, mode='r+b', size=0)
self._fh.seek(0, 2)
else:
self._fh = FileHandle(file, mode='wb', size=0)
self._fh.write({'<': b'II', '>': b'MM'}[byteorder])
if bigtiff:
self._fh.write(struct.pack(byteorder+'HHH', 43, 8, 0))
else:
self._fh.write(struct.pack(byteorder+'H', 42))
# first IFD
self._ifdoffset = self._fh.tell()
self._fh.write(struct.pack(byteorder+self._offsetformat, 0))
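The header bytes written by the constructor above can be reproduced with struct alone. A sketch (`tiff_header` is a hypothetical helper): byte-order mark, then version 42 for classic TIFF, or version 43 plus the 8-byte offset size for BigTIFF.

```python
import struct

def tiff_header(byteorder='<', bigtiff=False):
    # 'II' = little-endian (Intel), 'MM' = big-endian (Motorola)
    header = {'<': b'II', '>': b'MM'}[byteorder]
    if bigtiff:
        # version 43, offset size 8, reserved 0
        header += struct.pack(byteorder + 'HHH', 43, 8, 0)
    else:
        # version 42 (classic TIFF)
        header += struct.pack(byteorder + 'H', 42)
    return header

assert tiff_header('<') == b'II\x2a\x00'
assert tiff_header('>', bigtiff=True) == b'MM\x00\x2b\x00\x08\x00\x00'
```

The next bytes in the file are the offset to the first IFD, initially written as 0 and patched once the IFD position is known.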
def save(self, data=None, shape=None, dtype=None, returnoffset=False,
photometric=None, planarconfig=None, extrasamples=None, tile=None,
contiguous=True, align=16, truncate=False, compress=0,
rowsperstrip=None, predictor=False, colormap=None,
description=None, datetime=None, resolution=None, subfiletype=0,
software='tifffile.py', metadata={}, ijmetadata=None,
extratags=()):
"""Write numpy array and tags to TIFF file.
The data shape's last dimensions are assumed to be image depth,
height (length), width, and samples.
If a colormap is provided, the data's dtype must be uint8 or uint16
and the data values are indices into the last dimension of the
colormap.
If 'shape' and 'dtype' are specified, an empty array is saved.
This option cannot be used with compression or multiple tiles.
Image data are written uncompressed in one strip per plane by default.
Dimensions larger than 2 to 4 (depending on photometric mode, planar
configuration, and SGI mode) are flattened and saved as separate pages.
The SampleFormat and BitsPerSample tags are derived from the data type.
Parameters
----------
data : numpy.ndarray or None
Input image array.
shape : tuple or None
Shape of the empty array to save. Used only if 'data' is None.
dtype : numpy.dtype or None
Data-type of the empty array to save. Used only if 'data' is None.
returnoffset : bool
If True and the image data in the file is memory-mappable, return
the offset and number of bytes of the image data in the file.
photometric : {'MINISBLACK', 'MINISWHITE', 'RGB', 'PALETTE', 'CFA'}
The color space of the image data.
By default, this setting is inferred from the data shape and the
value of colormap.
For CFA images, DNG tags must be specified in 'extratags'.
planarconfig : {'CONTIG', 'SEPARATE'}
Specifies if samples are stored contiguous or in separate planes.
By default, this setting is inferred from the data shape.
If this parameter is set, extra samples are used to store grayscale
images.
'CONTIG': last dimension contains samples.
'SEPARATE': third last dimension contains samples.
extrasamples : tuple of {'UNSPECIFIED', 'ASSOCALPHA', 'UNASSALPHA'}
Defines the interpretation of extra components in pixels.
'UNSPECIFIED': no transparency information (default).
'ASSOCALPHA': single, true transparency with pre-multiplied color.
'UNASSALPHA': independent transparency masks.
tile : tuple of int
The shape (depth, length, width) of image tiles to write.
If None (default), image data are written in strips.
The tile length and width must be a multiple of 16.
If the tile depth is provided, the SGI ImageDepth and TileDepth
tags are used to save volume data.
Unless a single tile is used, tiles cannot be used to write
contiguous files.
Few applications can read the SGI format, e.g. MeVisLab.
contiguous : bool
If True (default) and the data and parameters are compatible with
previous ones, if any, the image data are stored contiguously after
the previous one. Parameters 'photometric' and 'planarconfig'
are ignored. Parameters 'description', 'datetime', and 'extratags'
are written to the first page of a contiguous series only.
align : int
Byte boundary on which to align the image data in the file.
Default 16. Use mmap.ALLOCATIONGRANULARITY for memory-mapped data.
Following contiguous writes are not aligned.
truncate : bool
If True, only write the first page including shape metadata if
possible (uncompressed, contiguous, not tiled).
Other TIFF readers will only be able to read part of the data.
compress : int or str or (str, int)
If 0 (default), data are written uncompressed.
If 0-9, the level of ADOBE_DEFLATE compression.
If a str, one of TIFF.COMPRESSION, e.g. 'LZMA' or 'ZSTD'.
If a tuple, first item is one of TIFF.COMPRESSION and second item
is compression level.
Compression cannot be used to write contiguous files.
rowsperstrip : int
The number of rows per strip used for compression.
Uncompressed data are written in one strip per plane.
predictor : bool
If True, apply horizontal differencing or floating point predictor
before compression.
colormap : numpy.ndarray
RGB color values for the corresponding data value.
Must be of shape (3, 2**(data.itemsize*8)) and dtype uint16.
description : str
The subject of the image. Must be 7-bit ASCII. Cannot be used with
the ImageJ format. Saved with the first page only.
datetime : datetime
Date and time of image creation in '%Y:%m:%d %H:%M:%S' format.
If None (default), the current date and time is used.
Saved with the first page only.
resolution : (float, float[, str]) or ((int, int), (int, int)[, str])
X and Y resolutions in pixels per resolution unit as float or
rational numbers. A third, optional parameter specifies the
resolution unit, which must be None (default for ImageJ),
'INCH' (default), or 'CENTIMETER'.
subfiletype : int
Bitfield to indicate the kind of data. Set bit 0 if the image
is a reduced-resolution version of another image. Set bit 1 if
the image is part of a multi-page image. Set bit 2 if the image
is transparency mask for another image (photometric must be
MASK, SamplesPerPixel and BitsPerSample must be 1).
software : str
Name of the software used to create the file. Must be 7-bit ASCII.
Saved with the first page only.
metadata : dict
Additional meta data to be saved along with shape information
in JSON or ImageJ formats in an ImageDescription tag.
If None, do not write a second ImageDescription tag.
Strings must be 7-bit ASCII. Saved with the first page only.
ijmetadata : dict
Additional meta data to be saved in application specific
IJMetadata and IJMetadataByteCounts tags. Refer to the
imagej_metadata_tag function for valid keys and values.
Saved with the first page only.
extratags : sequence of tuples
Additional tags as [(code, dtype, count, value, writeonce)].
code : int
The TIFF tag Id.
dtype : str
Data type of items in 'value' in Python struct format.
One of B, s, H, I, 2I, b, h, i, 2i, f, d, Q, or q.
count : int
Number of data values. Not used for string or byte string
values.
value : sequence
'Count' values compatible with 'dtype'.
Byte strings must contain count values of dtype packed as
binary data.
writeonce : bool
If True, the tag is written to the first page only.
"""
# TODO: refactor this function
fh = self._fh
byteorder = self._byteorder
if data is None:
if compress:
raise ValueError('cannot save compressed empty file')
datashape = shape
datadtype = numpy.dtype(dtype).newbyteorder(byteorder)
datadtypechar = datadtype.char
else:
data = numpy.asarray(data, byteorder+data.dtype.char, 'C')
if data.size == 0:
raise ValueError('cannot save empty array')
datashape = data.shape
datadtype = data.dtype
datadtypechar = data.dtype.char
returnoffset = returnoffset and datadtype.isnative
bilevel = datadtypechar == '?'
if bilevel:
index = -1 if datashape[-1] > 1 else -2
datasize = product(datashape[:index])
if datashape[index] % 8:
datasize *= datashape[index] // 8 + 1
else:
datasize *= datashape[index] // 8
else:
datasize = product(datashape) * datadtype.itemsize
# just append contiguous data if possible
self._truncate = bool(truncate)
if self._datashape:
if (not contiguous
or self._datashape[1:] != datashape
or self._datadtype != datadtype
or (compress and self._tags)
or tile
or not numpy.array_equal(colormap, self._colormap)):
# incompatible shape, dtype, compression mode, or colormap
self._write_remaining_pages()
self._write_image_description()
self._truncate = False
self._descriptionoffset = 0
self._descriptionlenoffset = 0
self._datashape = None
self._colormap = None
if self._imagej:
raise ValueError(
'ImageJ does not support non-contiguous data')
else:
# consecutive mode
self._datashape = (self._datashape[0] + 1,) + datashape
if not compress:
# write contiguous data, write IFDs/tags later
offset = fh.tell()
if data is None:
fh.write_empty(datasize)
else:
fh.write_array(data)
if returnoffset:
return offset, datasize
return None
input_shape = datashape
tagnoformat = self._tagnoformat
valueformat = self._valueformat
offsetformat = self._offsetformat
offsetsize = self._offsetsize
tagsize = self._tagsize
MINISBLACK = TIFF.PHOTOMETRIC.MINISBLACK
MINISWHITE = TIFF.PHOTOMETRIC.MINISWHITE
RGB = TIFF.PHOTOMETRIC.RGB
CFA = TIFF.PHOTOMETRIC.CFA
PALETTE = TIFF.PHOTOMETRIC.PALETTE
CONTIG = TIFF.PLANARCONFIG.CONTIG
SEPARATE = TIFF.PLANARCONFIG.SEPARATE
# parse input
if photometric is not None:
photometric = enumarg(TIFF.PHOTOMETRIC, photometric)
if planarconfig:
planarconfig = enumarg(TIFF.PLANARCONFIG, planarconfig)
if extrasamples is None:
extrasamples_ = None
else:
extrasamples_ = tuple(enumarg(TIFF.EXTRASAMPLE, es)
for es in sequence(extrasamples))
if not compress:
compress = False
compresstag = 1
# TODO: support predictors without compression
predictor = False
predictortag = 1
else:
if isinstance(compress, (tuple, list)):
compress, compresslevel = compress
elif isinstance(compress, int):
compress, compresslevel = 'ADOBE_DEFLATE', int(compress)
if not 0 <= compresslevel <= 9:
raise ValueError('invalid compression level %s' % compresslevel)
else:
compresslevel = None
compress = compress.upper()
compresstag = enumarg(TIFF.COMPRESSION, compress)
if predictor:
if datadtype.kind in 'iu':
predictortag = 2
predictor = TIFF.PREDICTORS[2]
elif datadtype.kind == 'f':
predictortag = 3
predictor = TIFF.PREDICTORS[3]
else:
raise ValueError('cannot apply predictor to %s' % datadtype)
# prepare ImageJ format
if self._imagej:
# if predictor or compress:
# warnings.warn(
# 'ImageJ cannot handle predictors or compression')
if description:
warnings.warn('not writing description to ImageJ file')
description = None
volume = False
if datadtypechar not in 'BHhf':
raise ValueError(
'ImageJ does not support data type %s' % datadtypechar)
ijrgb = photometric == RGB if photometric else None
if datadtypechar not in 'B':
ijrgb = False
ijshape = imagej_shape(datashape, ijrgb)
if ijshape[-1] in (3, 4):
photometric = RGB
if datadtypechar not in 'B':
raise ValueError('ImageJ does not support data type %s '
'for RGB' % datadtypechar)
elif photometric is None:
photometric = MINISBLACK
planarconfig = None
if planarconfig == SEPARATE:
raise ValueError('ImageJ does not support planar images')
else:
planarconfig = CONTIG if ijrgb else None
# define compress function
if compress:
compressor = TIFF.COMPESSORS[compresstag]
if predictor:
def compress(data, level=compresslevel):
data = predictor(data, axis=-2)
return compressor(data, level)
else:
def compress(data, level=compresslevel):
return compressor(data, level)
# verify colormap and indices
if colormap is not None:
if datadtypechar not in 'BH':
raise ValueError('invalid data dtype for palette mode')
colormap = numpy.asarray(colormap, dtype=byteorder+'H')
if colormap.shape != (3, 2**(datadtype.itemsize * 8)):
raise ValueError('invalid color map shape')
self._colormap = colormap
# verify tile shape
if tile:
tile = tuple(int(i) for i in tile[:3])
volume = len(tile) == 3
if (len(tile) < 2 or tile[-1] % 16 or tile[-2] % 16 or
any(i < 1 for i in tile)):
raise ValueError('invalid tile shape')
else:
tile = ()
volume = False
# normalize data shape to 5D or 6D, depending on volume:
# (pages, planar_samples, [depth,] height, width, contig_samples)
datashape = reshape_nd(datashape, 3 if photometric == RGB else 2)
shape = datashape
ndim = len(datashape)
samplesperpixel = 1
extrasamples = 0
if volume and ndim < 3:
volume = False
if colormap is not None:
photometric = PALETTE
planarconfig = None
if photometric is None:
photometric = MINISBLACK
if bilevel:
photometric = MINISWHITE
elif planarconfig == CONTIG:
if ndim > 2 and shape[-1] in (3, 4):
photometric = RGB
elif planarconfig == SEPARATE:
if volume and ndim > 3 and shape[-4] in (3, 4):
photometric = RGB
elif ndim > 2 and shape[-3] in (3, 4):
photometric = RGB
elif ndim > 2 and shape[-1] in (3, 4):
photometric = RGB
elif self._imagej:
photometric = MINISBLACK
elif volume and ndim > 3 and shape[-4] in (3, 4):
photometric = RGB
elif ndim > 2 and shape[-3] in (3, 4):
photometric = RGB
if planarconfig and len(shape) <= (3 if volume else 2):
planarconfig = None
if photometric not in (0, 1, 3, 4):
photometric = MINISBLACK
if photometric == RGB:
if len(shape) < 3:
raise ValueError('not a RGB(A) image')
if len(shape) < 4:
volume = False
if planarconfig is None:
if shape[-1] in (3, 4):
planarconfig = CONTIG
elif shape[-4 if volume else -3] in (3, 4):
planarconfig = SEPARATE
elif shape[-1] > shape[-4 if volume else -3]:
planarconfig = SEPARATE
else:
planarconfig = CONTIG
if planarconfig == CONTIG:
datashape = (-1, 1) + shape[(-4 if volume else -3):]
samplesperpixel = datashape[-1]
else:
datashape = (-1,) + shape[(-4 if volume else -3):] + (1,)
samplesperpixel = datashape[1]
if samplesperpixel > 3:
extrasamples = samplesperpixel - 3
elif photometric == CFA:
if len(shape) != 2:
raise ValueError('invalid CFA image')
volume = False
planarconfig = None
datashape = (-1, 1) + shape[-2:] + (1,)
if 50706 not in (et[0] for et in extratags):
raise ValueError('must specify DNG tags for CFA image')
elif planarconfig and len(shape) > (3 if volume else 2):
if planarconfig == CONTIG:
datashape = (-1, 1) + shape[(-4 if volume else -3):]
samplesperpixel = datashape[-1]
else:
datashape = (-1,) + shape[(-4 if volume else -3):] + (1,)
samplesperpixel = datashape[1]
extrasamples = samplesperpixel - 1
else:
planarconfig = None
while len(shape) > 2 and shape[-1] == 1:
shape = shape[:-1] # remove trailing 1s
if len(shape) < 3:
volume = False
if extrasamples_ is None:
datashape = (-1, 1) + shape[(-3 if volume else -2):] + (1,)
else:
datashape = (-1, 1) + shape[(-4 if volume else -3):]
samplesperpixel = datashape[-1]
extrasamples = samplesperpixel - 1
if subfiletype & 0b100:
# FILETYPE_MASK
if not (bilevel and samplesperpixel == 1 and
photometric in (0, 1, 4)):
raise ValueError('invalid SubfileType MASK')
photometric = TIFF.PHOTOMETRIC.MASK
# normalize shape to 6D
assert len(datashape) in (5, 6)
if len(datashape) == 5:
datashape = datashape[:2] + (1,) + datashape[2:]
if datashape[0] == -1:
s0 = product(input_shape) // product(datashape[1:])
datashape = (s0,) + datashape[1:]
shape = datashape
if data is not None:
data = data.reshape(shape)
if tile and not volume:
tile = (1, tile[-2], tile[-1])
if photometric == PALETTE:
if (samplesperpixel != 1 or extrasamples or
shape[1] != 1 or shape[-1] != 1):
raise ValueError('invalid data shape for palette mode')
if photometric == RGB and samplesperpixel == 2:
raise ValueError('not a RGB image (samplesperpixel=2)')
if bilevel:
if compresstag not in (1, 32773):
raise ValueError('cannot compress bilevel image')
if tile:
raise ValueError('cannot save tiled bilevel image')
if photometric not in (0, 1, 4):
raise ValueError('cannot save bilevel image as %s' %
str(photometric))
datashape = list(datashape)
if datashape[-2] % 8:
datashape[-2] = datashape[-2] // 8 + 1
else:
datashape[-2] = datashape[-2] // 8
datashape = tuple(datashape)
assert datasize == product(datashape)
if data is not None:
data = numpy.packbits(data, axis=-2)
assert datashape[-2] == data.shape[-2]
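The bit-packing above can be illustrated with a simplified 2-D sketch (packing along the last axis here, whereas the code above uses axis=-2 because the last dimension holds samples). numpy.packbits rounds each row of bits up to whole bytes, matching the datasize computation earlier in this function:

```python
import numpy

# 10 rows of 219 boolean pixels: 219 bits round up to 28 bytes per row,
# the same // 8 + 1 arithmetic used for bilevel datashape above.
bits = numpy.ones((10, 219), dtype=bool)
packed = numpy.packbits(bits, axis=-1)
assert packed.shape == (10, 219 // 8 + 1)
assert packed.dtype == numpy.uint8
```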
bytestr = bytes if sys.version[0] == '2' else (
lambda x: bytes(x, 'ascii') if isinstance(x, str) else x)
tags = [] # list of (code, ifdentry, ifdvalue, writeonce)
strip_or_tile = 'Tile' if tile else 'Strip'
tagbytecounts = TIFF.TAG_NAMES[strip_or_tile + 'ByteCounts']
tag_offsets = TIFF.TAG_NAMES[strip_or_tile + 'Offsets']
self._tagoffsets = tag_offsets
def pack(fmt, *val):
return struct.pack(byteorder+fmt, *val)
def addtag(code, dtype, count, value, writeonce=False):
# Compute ifdentry & ifdvalue bytes from code, dtype, count, value
# Append (code, ifdentry, ifdvalue, writeonce) to tags list
code = int(TIFF.TAG_NAMES.get(code, code))
try:
tifftype = TIFF.DATA_DTYPES[dtype]
except KeyError:
raise ValueError('unknown dtype %s' % dtype)
rawcount = count
if dtype == 's':
# strings
value = bytestr(value) + b'\0'
count = rawcount = len(value)
rawcount = value.find(b'\0\0')
if rawcount < 0:
rawcount = count
else:
rawcount += 1 # length of string without buffer
value = (value,)
elif isinstance(value, bytes):
# packed binary data
dtsize = struct.calcsize(dtype)
if len(value) % dtsize:
raise ValueError('invalid packed binary data')
count = len(value) // dtsize
if len(dtype) > 1:
count *= int(dtype[:-1])
dtype = dtype[-1]
ifdentry = [pack('HH', code, tifftype),
pack(offsetformat, rawcount)]
ifdvalue = None
if struct.calcsize(dtype) * count <= offsetsize:
# value(s) can be written directly
if isinstance(value, bytes):
ifdentry.append(pack(valueformat, value))
elif count == 1:
if isinstance(value, (tuple, list, numpy.ndarray)):
value = value[0]
ifdentry.append(pack(valueformat, pack(dtype, value)))
else:
ifdentry.append(pack(valueformat,
pack(str(count)+dtype, *value)))
else:
# use offset to value(s)
ifdentry.append(pack(offsetformat, 0))
if isinstance(value, bytes):
ifdvalue = value
elif isinstance(value, numpy.ndarray):
assert value.size == count
assert value.dtype.char == dtype
ifdvalue = value.tostring()
elif isinstance(value, (tuple, list)):
ifdvalue = pack(str(count)+dtype, *value)
else:
ifdvalue = pack(dtype, value)
tags.append((code, b''.join(ifdentry), ifdvalue, writeonce))
def rational(arg, max_denominator=1000000):
""""Return nominator and denominator from float or two integers."""
from fractions import Fraction # delayed import
try:
f = Fraction.from_float(arg)
except TypeError:
f = Fraction(arg[0], arg[1])
f = f.limit_denominator(max_denominator)
return f.numerator, f.denominator
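For reference, the `rational` helper above can be exercised on its own. The following standalone sketch (the name `to_rational` is illustrative) mirrors its logic for values stored in TIFF RATIONAL tags such as XResolution:

```python
from fractions import Fraction

def to_rational(arg, max_denominator=1000000):
    # Float input: convert exactly, then reduce to a bounded denominator.
    try:
        f = Fraction.from_float(arg)
    except TypeError:
        # Pair input: treat as an explicit (numerator, denominator).
        f = Fraction(arg[0], arg[1])
    f = f.limit_denominator(max_denominator)
    return f.numerator, f.denominator

print(to_rational(0.25))      # (1, 4)
print(to_rational((300, 1)))  # (300, 1)
```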
if description:
# user provided description
addtag('ImageDescription', 's', 0, description, writeonce=True)
# write shape and metadata to ImageDescription
self._metadata = {} if not metadata else metadata.copy()
if self._imagej:
description = imagej_description(
input_shape, shape[-1] in (3, 4), self._colormap is not None,
**self._metadata)
elif metadata or metadata == {}:
if self._truncate:
self._metadata.update(truncated=True)
description = json_description(input_shape, **self._metadata)
else:
description = None
if description:
# add 64 bytes buffer
# the image description might be updated later with the final shape
description = str2bytes(description, 'ascii')
description += b'\0' * 64
self._descriptionlen = len(description)
addtag('ImageDescription', 's', 0, description, writeonce=True)
if software:
addtag('Software', 's', 0, software, writeonce=True)
if datetime is None:
datetime = self._now()
addtag('DateTime', 's', 0, datetime.strftime('%Y:%m:%d %H:%M:%S'),
writeonce=True)
addtag('Compression', 'H', 1, compresstag)
if predictor:
addtag('Predictor', 'H', 1, predictortag)
addtag('ImageWidth', 'I', 1, shape[-2])
addtag('ImageLength', 'I', 1, shape[-3])
if tile:
addtag('TileWidth', 'I', 1, tile[-1])
addtag('TileLength', 'I', 1, tile[-2])
if tile[0] > 1:
addtag('ImageDepth', 'I', 1, shape[-4])
addtag('TileDepth', 'I', 1, tile[0])
addtag('NewSubfileType', 'I', 1, subfiletype)
if not bilevel:
sampleformat = {'u': 1, 'i': 2, 'f': 3, 'c': 6}[datadtype.kind]
addtag('SampleFormat', 'H', samplesperpixel,
(sampleformat,) * samplesperpixel)
addtag('PhotometricInterpretation', 'H', 1, photometric.value)
if colormap is not None:
addtag('ColorMap', 'H', colormap.size, colormap)
addtag('SamplesPerPixel', 'H', 1, samplesperpixel)
if bilevel:
pass
elif planarconfig and samplesperpixel > 1:
addtag('PlanarConfiguration', 'H', 1, planarconfig.value)
addtag('BitsPerSample', 'H', samplesperpixel,
(datadtype.itemsize * 8,) * samplesperpixel)
else:
addtag('BitsPerSample', 'H', 1, datadtype.itemsize * 8)
if extrasamples:
if extrasamples_ is not None:
if extrasamples != len(extrasamples_):
raise ValueError('wrong number of extrasamples specified')
addtag('ExtraSamples', 'H', extrasamples, extrasamples_)
elif photometric == RGB and extrasamples == 1:
# Unassociated alpha channel
addtag('ExtraSamples', 'H', 1, 2)
else:
# Unspecified alpha channel
addtag('ExtraSamples', 'H', extrasamples, (0,) * extrasamples)
if resolution is not None:
addtag('XResolution', '2I', 1, rational(resolution[0]))
addtag('YResolution', '2I', 1, rational(resolution[1]))
if len(resolution) > 2:
unit = resolution[2]
unit = 1 if unit is None else enumarg(TIFF.RESUNIT, unit)
elif self._imagej:
unit = 1
else:
unit = 2
addtag('ResolutionUnit', 'H', 1, unit)
elif not self._imagej:
addtag('XResolution', '2I', 1, (1, 1))
addtag('YResolution', '2I', 1, (1, 1))
addtag('ResolutionUnit', 'H', 1, 1)
if ijmetadata:
for t in imagej_metadata_tag(ijmetadata, byteorder):
addtag(*t)
contiguous = not compress
if tile:
# one chunk per tile per plane
tiles = ((shape[2] + tile[0] - 1) // tile[0],
(shape[3] + tile[1] - 1) // tile[1],
(shape[4] + tile[2] - 1) // tile[2])
numtiles = product(tiles) * shape[1]
stripbytecounts = [
product(tile) * shape[-1] * datadtype.itemsize] * numtiles
addtag(tagbytecounts, offsetformat, numtiles, stripbytecounts)
addtag(tag_offsets, offsetformat, numtiles, [0] * numtiles)
contiguous = contiguous and product(tiles) == 1
if not contiguous:
# allocate tile buffer
chunk = numpy.empty(tile + (shape[-1],), dtype=datadtype)
elif contiguous:
# one strip per plane
if bilevel:
stripbytecounts = [product(datashape[2:])] * shape[1]
else:
stripbytecounts = [
product(datashape[2:]) * datadtype.itemsize] * shape[1]
addtag(tagbytecounts, offsetformat, shape[1], stripbytecounts)
addtag(tag_offsets, offsetformat, shape[1], [0] * shape[1])
addtag('RowsPerStrip', 'I', 1, shape[-3])
else:
# compress rowsperstrip rows or ~64 KB chunks
rowsize = product(shape[-2:]) * datadtype.itemsize
if rowsperstrip is None:
rowsperstrip = 65536 // rowsize
if rowsperstrip < 1:
rowsperstrip = 1
elif rowsperstrip > shape[-3]:
rowsperstrip = shape[-3]
addtag('RowsPerStrip', 'I', 1, rowsperstrip)
numstrips = (shape[-3] + rowsperstrip - 1) // rowsperstrip
numstrips *= shape[1]
stripbytecounts = [0] * numstrips
addtag(tagbytecounts, offsetformat, numstrips, [0] * numstrips)
addtag(tag_offsets, offsetformat, numstrips, [0] * numstrips)
if data is None and not contiguous:
raise ValueError('cannot write non-contiguous empty file')
# add extra tags from user
for t in extratags:
addtag(*t)
# TODO: check TIFFReadDirectoryCheckOrder warning in files containing
# multiple tags of same code
# the entries in an IFD must be sorted in ascending order by tag code
tags = sorted(tags, key=lambda x: x[0])
if not (self._bigtiff or self._imagej) and (
fh.tell() + datasize > 2**32-1):
raise ValueError('data too large for standard TIFF file')
# if not compressed or multi-tiled, write the first IFD and then
# all data contiguously; else, write all IFDs and data interleaved
for pageindex in range(1 if contiguous else shape[0]):
# update pointer at ifd_offset
pos = fh.tell()
if pos % 2:
# location of IFD must begin on a word boundary
fh.write(b'\0')
pos += 1
fh.seek(self._ifdoffset)
fh.write(pack(offsetformat, pos))
fh.seek(pos)
# write ifdentries
fh.write(pack(tagnoformat, len(tags)))
tag_offset = fh.tell()
fh.write(b''.join(t[1] for t in tags))
self._ifdoffset = fh.tell()
fh.write(pack(offsetformat, 0)) # offset to next IFD
# write tag values and patch offsets in ifdentries, if necessary
for tagindex, tag in enumerate(tags):
if tag[2]:
pos = fh.tell()
if pos % 2:
# tag value is expected to begin on word boundary
fh.write(b'\0')
pos += 1
fh.seek(tag_offset + tagindex*tagsize + offsetsize + 4)
fh.write(pack(offsetformat, pos))
fh.seek(pos)
if tag[0] == tag_offsets:
stripoffsetsoffset = pos
elif tag[0] == tagbytecounts:
strip_bytecounts_offset = pos
elif tag[0] == 270 and tag[2].endswith(b'\0\0\0\0'):
# image description buffer
self._descriptionoffset = pos
self._descriptionlenoffset = (
tag_offset + tagindex * tagsize + 4)
fh.write(tag[2])
# write image data
data_offset = fh.tell()
skip = align - data_offset % align
fh.seek(skip, 1)
data_offset += skip
if contiguous:
if data is None:
fh.write_empty(datasize)
else:
fh.write_array(data)
elif tile:
if data is None:
fh.write_empty(numtiles * stripbytecounts[0])
else:
stripindex = 0
for plane in data[pageindex]:
for tz in range(tiles[0]):
for ty in range(tiles[1]):
for tx in range(tiles[2]):
c0 = min(tile[0], shape[2] - tz*tile[0])
c1 = min(tile[1], shape[3] - ty*tile[1])
c2 = min(tile[2], shape[4] - tx*tile[2])
chunk[c0:, c1:, c2:] = 0
chunk[:c0, :c1, :c2] = plane[
tz*tile[0]:tz*tile[0]+c0,
ty*tile[1]:ty*tile[1]+c1,
tx*tile[2]:tx*tile[2]+c2]
if compress:
t = compress(chunk)
fh.write(t)
stripbytecounts[stripindex] = len(t)
stripindex += 1
else:
fh.write_array(chunk)
fh.flush()
elif compress:
# write one strip per rowsperstrip
assert data.shape[2] == 1 # not handling depth
numstrips = (shape[-3] + rowsperstrip - 1) // rowsperstrip
stripindex = 0
for plane in data[pageindex]:
for i in range(numstrips):
strip = plane[0, i*rowsperstrip: (i+1)*rowsperstrip]
strip = compress(strip)
fh.write(strip)
stripbytecounts[stripindex] = len(strip)
stripindex += 1
# update strip/tile offsets and bytecounts if necessary
pos = fh.tell()
for tagindex, tag in enumerate(tags):
if tag[0] == tag_offsets: # strip/tile offsets
if tag[2]:
fh.seek(stripoffsetsoffset)
strip_offset = data_offset
for size in stripbytecounts:
fh.write(pack(offsetformat, strip_offset))
strip_offset += size
else:
fh.seek(tag_offset + tagindex*tagsize + offsetsize + 4)
fh.write(pack(offsetformat, data_offset))
elif tag[0] == tagbytecounts: # strip/tile bytecounts
if compress:
if tag[2]:
fh.seek(strip_bytecounts_offset)
for size in stripbytecounts:
fh.write(pack(offsetformat, size))
else:
fh.seek(tag_offset + tagindex*tagsize +
offsetsize + 4)
fh.write(pack(offsetformat, stripbytecounts[0]))
break
fh.seek(pos)
fh.flush()
# remove tags that should be written only once
if pageindex == 0:
tags = [tag for tag in tags if not tag[-1]]
self._shape = shape
self._datashape = (1,) + input_shape
self._datadtype = datadtype
self._dataoffset = data_offset
self._databytecounts = stripbytecounts
if contiguous:
# write remaining IFDs/tags later
self._tags = tags
# return offset and size of image data
if returnoffset:
return data_offset, sum(stripbytecounts)
return None
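The tiled-write branch above computes the number of tiles per dimension with ceiling division, so a partial tile at the image edge still counts as a full chunk. A minimal sketch of that arithmetic (values are hypothetical):

```python
def num_tiles(extent, tile):
    # Ceiling division: how many fixed-size tiles are needed to cover
    # an image extent, counting a trailing partial tile as one tile.
    return (extent + tile - 1) // tile

print(num_tiles(100, 64))  # 2
print(num_tiles(64, 64))   # 1
```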
def _write_remaining_pages(self):
"""Write outstanding IFDs and tags to file."""
if not self._tags or self._truncate:
return
fh = self._fh
fhpos = fh.tell()
if fhpos % 2:
fh.write(b'\0')
fhpos += 1
byteorder = self._byteorder
offsetformat = self._offsetformat
offsetsize = self._offsetsize
tagnoformat = self._tagnoformat
tagsize = self._tagsize
dataoffset = self._dataoffset
pagedatasize = sum(self._databytecounts)
pageno = self._shape[0] * self._datashape[0] - 1
def pack(fmt, *val):
return struct.pack(byteorder+fmt, *val)
# construct template IFD in memory
# need to patch offsets to next IFD and data before writing to disk
ifd = io.BytesIO()
ifd.write(pack(tagnoformat, len(self._tags)))
tagoffset = ifd.tell()
ifd.write(b''.join(t[1] for t in self._tags))
ifdoffset = ifd.tell()
ifd.write(pack(offsetformat, 0)) # offset to next IFD
# tag values
for tagindex, tag in enumerate(self._tags):
offset2value = tagoffset + tagindex*tagsize + offsetsize + 4
if tag[2]:
pos = ifd.tell()
if pos % 2: # tag value is expected to begin on word boundary
ifd.write(b'\0')
pos += 1
ifd.seek(offset2value)
try:
ifd.write(pack(offsetformat, pos + fhpos))
except Exception: # struct.error
if self._imagej:
warnings.warn('truncating ImageJ file')
self._truncate = True
return
raise ValueError('data too large for non-BigTIFF file')
ifd.seek(pos)
ifd.write(tag[2])
if tag[0] == self._tagoffsets:
# save strip/tile offsets for later updates
stripoffset2offset = offset2value
stripoffset2value = pos
elif tag[0] == self._tagoffsets:
# save strip/tile offsets for later updates
stripoffset2offset = None
stripoffset2value = offset2value
# size to word boundary
if ifd.tell() % 2:
ifd.write(b'\0')
# check if all IFDs fit in file
pos = fh.tell()
if not self._bigtiff and pos + ifd.tell() * pageno > 2**32 - 256:
if self._imagej:
warnings.warn('truncating ImageJ file')
self._truncate = True
return
raise ValueError('data too large for non-BigTIFF file')
# TODO: assemble IFD chain in memory
for _ in range(pageno):
# update pointer at IFD offset
pos = fh.tell()
fh.seek(self._ifdoffset)
fh.write(pack(offsetformat, pos))
fh.seek(pos)
self._ifdoffset = pos + ifdoffset
# update strip/tile offsets in IFD
dataoffset += pagedatasize # offset to image data
if stripoffset2offset is None:
ifd.seek(stripoffset2value)
ifd.write(pack(offsetformat, dataoffset))
else:
ifd.seek(stripoffset2offset)
ifd.write(pack(offsetformat, pos + stripoffset2value))
ifd.seek(stripoffset2value)
stripoffset = dataoffset
for size in self._databytecounts:
ifd.write(pack(offsetformat, stripoffset))
stripoffset += size
# write IFD entry
fh.write(ifd.getvalue())
self._tags = None
self._datadtype = None
self._dataoffset = None
self._databytecounts = None
# do not reset _shape or _data_shape
def _write_image_description(self):
"""Write meta data to ImageDescription tag."""
if (not self._datashape or self._datashape[0] == 1 or
self._descriptionoffset <= 0):
return
colormapped = self._colormap is not None
if self._imagej:
isrgb = self._shape[-1] in (3, 4)
description = imagej_description(
self._datashape, isrgb, colormapped, **self._metadata)
else:
description = json_description(self._datashape, **self._metadata)
# rewrite description and its length to file
description = description.encode('utf-8')
description = description[:self._descriptionlen-1]
pos = self._fh.tell()
self._fh.seek(self._descriptionoffset)
self._fh.write(description)
self._fh.seek(self._descriptionlenoffset)
self._fh.write(struct.pack(self._byteorder+self._offsetformat,
len(description)+1))
self._fh.seek(pos)
self._descriptionoffset = 0
self._descriptionlenoffset = 0
self._descriptionlen = 0
def _now(self):
"""Return current date and time."""
return datetime.datetime.now()
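The DateTime tag written earlier uses TIFF's fixed 19-character 'YYYY:MM:DD HH:MM:SS' format (colons in the date part, per the TIFF 6.0 specification). A quick check with a hypothetical timestamp:

```python
import datetime

# Hypothetical timestamp; the format string matches the one passed to
# addtag('DateTime', ...) above.
stamp = datetime.datetime(2018, 11, 28, 12, 30, 5).strftime(
    '%Y:%m:%d %H:%M:%S')
print(stamp)  # 2018:11:28 12:30:05
```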
def close(self):
"""Write remaining pages and close file handle."""
if not self._truncate:
self._write_remaining_pages()
self._write_image_description()
self._fh.close()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
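The TiffFile class below detects byte order and format version from the first four bytes of a file: b'II' or b'MM' selects little- or big-endian, and the following 16-bit word is 42 for classic TIFF or 43 for BigTIFF. A self-contained sketch of that check (the function name is illustrative):

```python
import struct

def tiff_byteorder_version(header):
    # First two bytes select the byte order; raises KeyError for
    # non-TIFF input, mirroring the 'not a TIFF file' check below.
    byteorder = {b'II': '<', b'MM': '>'}[header[:2]]
    # Next two bytes hold the version: 42 = classic TIFF, 43 = BigTIFF.
    version = struct.unpack(byteorder + 'H', header[2:4])[0]
    return byteorder, version

print(tiff_byteorder_version(b'II*\x00'))  # ('<', 42)
```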
class TiffFile(object):
"""Read image and metadata from TIFF file.
TiffFile instances must be closed using the 'close' method, which is
automatically called when using the 'with' context manager.
Attributes
----------
pages : TiffPages
Sequence of TIFF pages in file.
series : list of TiffPageSeries
Sequences of closely related TIFF pages. These are computed
from OME, LSM, ImageJ, etc. metadata or based on similarity
of page properties such as shape, dtype, and compression.
is_flag : bool
If True, file is of a certain format.
Flags are: bigtiff, movie, shaped, ome, imagej, stk, lsm, fluoview,
nih, vista, micromanager, metaseries, mdgel, mediacy, tvips, fei,
sem, scn, svs, scanimage, andor, epics, ndpi, pilatus, qpi.
All attributes are read-only.
"""
def __init__(self, arg, name=None, offset=None, size=None,
multifile=True, movie=None, **kwargs):
"""Initialize instance from file.
Parameters
----------
arg : str or open file
Name of file or open file object.
The file objects are closed in TiffFile.close().
name : str
Optional name of file in case 'arg' is a file handle.
offset : int
Optional start position of embedded file. By default, this is
the current file position.
size : int
Optional size of embedded file. By default, this is the number
of bytes from the 'offset' to the end of the file.
multifile : bool
If True (default), series may include pages from multiple files.
Currently applies to OME-TIFF only.
movie : bool
If True, assume that later pages differ from first page only by
data offsets and byte counts. Significantly increases speed and
reduces memory usage when reading movies with thousands of pages.
Enabling this for non-movie files will result in data corruption
or crashes. Python 3 only.
kwargs : bool
'is_ome': If False, disable processing of OME-XML metadata.
"""
if 'fastij' in kwargs:
del kwargs['fastij']
raise DeprecationWarning('the fastij option will be removed')
for key, value in kwargs.items():
if key[:3] == 'is_' and key[3:] in TIFF.FILE_FLAGS:
if value is not None and not value:
setattr(self, key, bool(value))
else:
raise TypeError('unexpected keyword argument: %s' % key)
fh = FileHandle(arg, mode='rb', name=name, offset=offset, size=size)
self._fh = fh
self._multifile = bool(multifile)
self._files = {fh.name: self} # cache of TiffFiles
try:
fh.seek(0)
header = fh.read(4)
try:
byteorder = {b'II': '<', b'MM': '>'}[header[:2]]
except KeyError:
raise ValueError('not a TIFF file')
version = struct.unpack(byteorder+'H', header[2:4])[0]
if version == 43:
# BigTiff
offsetsize, zero = struct.unpack(byteorder+'HH', fh.read(4))
if zero != 0 or offsetsize != 8:
raise ValueError('invalid BigTIFF file')
if byteorder == '>':
self.tiff = TIFF.BIG_BE
else:
self.tiff = TIFF.BIG_LE
elif version == 42:
# Classic TIFF
if byteorder == '>':
self.tiff = TIFF.CLASSIC_BE
else:
self.tiff = TIFF.CLASSIC_LE
else:
raise ValueError('invalid TIFF file')
# file handle is at offset to offset to first page
self.pages = TiffPages(self)
# TODO: fix offsets in NDPI file > 4 GB
if self.is_lsm and (self.filehandle.size >= 2**32 or
self.pages[0].compression != 1 or
self.pages[1].compression != 1):
self._lsm_load_pages()
self._lsm_fix_strip_offsets()
self._lsm_fix_strip_bytecounts()
elif movie:
self.pages.useframes = True
except Exception:
fh.close()
raise
@property
def byteorder(self):
return self.tiff.byteorder
@property
def is_bigtiff(self):
return self.tiff.version == 43
@property
def filehandle(self):
"""Return file handle."""
return self._fh
@property
def filename(self):
"""Return name of file handle."""
return self._fh.name
@lazyattr
def fstat(self):
"""Return status of file handle as stat_result object."""
try:
return os.fstat(self._fh.fileno())
except Exception: # io.UnsupportedOperation
return None
def close(self):
"""Close open file handle(s)."""
for tif in self._files.values():
tif.filehandle.close()
self._files = {}
def asarray(self, key=None, series=None, out=None, validate=True,
maxworkers=None):
"""Return image data from multiple TIFF pages as numpy array.
By default, the data from the first series is returned.
Parameters
----------
key : int, slice, or sequence of page indices
Defines which pages to return as array.
series : int or TiffPageSeries
Defines which series of pages to return as array.
out : numpy.ndarray, str, or file-like object
Buffer where image data will be saved.
If None (default), a new array will be created.
If numpy.ndarray, a writable array of compatible dtype and shape.
If 'memmap', directly memory-map the image data in the TIFF file
if possible; else create a memory-mapped array in a temporary file.
If str or open file, the file name or file object used to
create a memory-map to an array stored in a binary file on disk.
validate : bool
If True (default), validate various tags.
Passed to TiffPage.asarray().
maxworkers : int or None
Maximum number of threads to concurrently get data from pages
or tiles. If None (default), multi-threading is enabled if data
are compressed. If 0, up to half the CPU cores are used.
If 1, multi-threading is disabled.
Reading data from file is limited to a single thread.
Using multiple threads can significantly speed up this function
if the bottleneck is decoding compressed data, e.g. in case of
large LZW compressed LSM files or JPEG compressed tiled slides.
If the bottleneck is I/O or pure Python code, using multiple
threads might be detrimental.
Returns
-------
numpy.ndarray
Image data from the specified pages.
See the TiffPage.asarray function for what kind of operations are
applied (or not) to the raw data stored in the file.
"""
if not self.pages:
return numpy.array([])
if key is None and series is None:
series = 0
if series is not None:
try:
series = self.series[series]
except (KeyError, TypeError):
pass
pages = series._pages
else:
pages = self.pages
if key is None:
pass
elif isinstance(key, inttypes):
pages = [pages[key]]
elif isinstance(key, slice):
pages = pages[key]
elif isinstance(key, collections.Iterable):
pages = [pages[k] for k in key]
else:
raise TypeError('key must be an int, slice, or sequence')
if not pages:
raise ValueError('no pages selected')
if self.is_nih:
result = stack_pages(pages, out=out, maxworkers=maxworkers,
squeeze=False)
elif key is None and series and series.offset:
typecode = self.byteorder + series.dtype.char
if pages[0].is_memmappable and (isinstance(out, str) and
out == 'memmap'):
result = self.filehandle.memmap_array(
typecode, series.shape, series.offset)
else:
if out is not None:
out = create_output(out, series.shape, series.dtype)
self.filehandle.seek(series.offset)
result = self.filehandle.read_array(
typecode, product(series.shape), out=out, native=True)
elif len(pages) == 1:
result = pages[0].asarray(out=out, validate=validate,
maxworkers=maxworkers)
else:
result = stack_pages(pages, out=out, maxworkers=maxworkers)
if result is None:
return None
if key is None:
try:
result.shape = series.shape
except ValueError:
try:
logging.warning(
'TiffFile.asarray: failed to reshape %s to %s',
result.shape, series.shape)
# try series of expected shapes
result.shape = (-1,) + series.shape
except ValueError:
# revert to generic shape
result.shape = (-1,) + pages[0].shape
elif len(pages) == 1:
result.shape = pages[0].shape
else:
result.shape = (-1,) + pages[0].shape
return result
@lazyattr
def series(self):
"""Return related pages as TiffPageSeries.
Side effect: after calling this function, TiffFile.pages might contain
TiffPage and TiffFrame instances.
"""
if not self.pages:
return []
useframes = self.pages.useframes
keyframe = self.pages.keyframe
series = []
for name in 'ome imagej lsm fluoview nih mdgel sis shaped'.split():
if getattr(self, 'is_' + name, False):
series = getattr(self, '_%s_series' % name)()
break
self.pages.useframes = useframes
self.pages.keyframe = keyframe
if not series:
series = self._generic_series()
# remove empty series, e.g. in MD Gel files
series = [s for s in series if product(s.shape) > 0]
for i, s in enumerate(series):
s.index = i
return series
def _generic_series(self):
"""Return image series in file."""
if self.pages.useframes:
# movie mode
page = self.pages[0]
shape = page.shape
axes = page.axes
if len(self.pages) > 1:
shape = (len(self.pages),) + shape
axes = 'I' + axes
return [TiffPageSeries(self.pages[:], shape, page.dtype, axes,
stype='movie')]
self.pages.clear(False)
self.pages.load()
result = []
keys = []
series = {}
compressions = TIFF.DECOMPESSORS
for page in self.pages:
if not page.shape or product(page.shape) == 0:
continue
key = page.shape + (page.axes, page.compression in compressions)
if key in series:
series[key].append(page)
else:
keys.append(key)
series[key] = [page]
for key in keys:
pages = series[key]
page = pages[0]
shape = page.shape
axes = page.axes
if len(pages) > 1:
shape = (len(pages),) + shape
axes = 'I' + axes
result.append(TiffPageSeries(pages, shape, page.dtype, axes,
stype='Generic'))
return result
def _shaped_series(self):
"""Return image series in "shaped" file."""
pages = self.pages
pages.useframes = True
lenpages = len(pages)
def append_series(series, pages, axes, shape, reshape, name,
truncated):
page = pages[0]
if not axes:
shape = page.shape
axes = page.axes
if len(pages) > 1:
shape = (len(pages),) + shape
axes = 'Q' + axes
size = product(shape)
resize = product(reshape)
if page.is_contiguous and resize > size and resize % size == 0:
if truncated is None:
truncated = True
axes = 'Q' + axes
shape = (resize // size,) + shape
try:
axes = reshape_axes(axes, shape, reshape)
shape = reshape
except ValueError as e:
logging.warning('Shaped series: %s', str(e))
series.append(
TiffPageSeries(pages, shape, page.dtype, axes, name=name,
stype='Shaped', truncated=truncated))
keyframe = axes = shape = reshape = name = None
series = []
index = 0
while True:
if index >= lenpages:
break
# new keyframe; start of new series
pages.keyframe = index
keyframe = pages[index]
if not keyframe.is_shaped:
logging.warning(
'Shaped series: invalid metadata or corrupted file')
return None
# read metadata
axes = None
shape = None
metadata = json_description_metadata(keyframe.is_shaped)
name = metadata.get('name', '')
reshape = metadata['shape']
truncated = metadata.get('truncated', None)
if 'axes' in metadata:
axes = metadata['axes']
if len(axes) == len(reshape):
shape = reshape
else:
axes = ''
logging.warning('Shaped series: axes do not match shape')
# skip pages if possible
spages = [keyframe]
size = product(reshape)
npages, mod = divmod(size, product(keyframe.shape))
if mod:
logging.warning(
'Shaped series: series shape does not match page shape')
return None
if 1 < npages <= lenpages - index:
size *= keyframe._dtype.itemsize
if truncated:
npages = 1
elif (keyframe.is_final and
keyframe.offset + size < pages[index+1].offset):
truncated = False
else:
# need to read all pages for series
truncated = False
for j in range(index+1, index+npages):
page = pages[j]
page.keyframe = keyframe
spages.append(page)
append_series(series, spages, axes, shape, reshape, name,
truncated)
index += npages
return series
def _imagej_series(self):
"""Return image series in ImageJ file."""
# ImageJ's dimension order is always TZCYXS
# TODO: fix loading of color, composite, or palette images
self.pages.useframes = True
self.pages.keyframe = 0
ij = self.imagej_metadata
pages = self.pages
page = pages[0]
def is_hyperstack():
# ImageJ hyperstacks store all image metadata in the first page and
# image data are stored contiguously before the second page, if any
if not page.is_final:
return False
images = ij.get('images', 0)
if images <= 1:
return False
offset, count = page.is_contiguous
if (count != product(page.shape) * page.bitspersample // 8
or offset + count*images > self.filehandle.size):
raise ValueError()
# check that next page is stored after data
if len(pages) > 1 and offset + count*images > pages[1].offset:
return False
return True
try:
hyperstack = is_hyperstack()
except ValueError:
logging.warning(
'ImageJ series: invalid metadata or corrupted file')
return None
if hyperstack:
# no need to read other pages
pages = [page]
else:
self.pages.load()
shape = []
axes = []
if 'frames' in ij:
shape.append(ij['frames'])
axes.append('T')
if 'slices' in ij:
shape.append(ij['slices'])
axes.append('Z')
if 'channels' in ij and not (page.photometric == 2 and not
ij.get('hyperstack', False)):
shape.append(ij['channels'])
axes.append('C')
remain = ij.get('images', len(pages))//(product(shape) if shape else 1)
if remain > 1:
shape.append(remain)
axes.append('I')
if page.axes[0] == 'I':
# contiguous multiple images
shape.extend(page.shape[1:])
axes.extend(page.axes[1:])
elif page.axes[:2] == 'SI':
# color-mapped contiguous multiple images
shape = page.shape[0:1] + tuple(shape) + page.shape[2:]
axes = list(page.axes[0]) + axes + list(page.axes[2:])
else:
shape.extend(page.shape)
axes.extend(page.axes)
truncated = (
hyperstack and len(self.pages) == 1 and
page.is_contiguous[1] != product(shape) * page.bitspersample // 8)
return [TiffPageSeries(pages, shape, page.dtype, axes, stype='ImageJ',
truncated=truncated)]
def _fluoview_series(self):
"""Return image series in FluoView file."""
self.pages.useframes = True
self.pages.keyframe = 0
self.pages.load()
mm = self.fluoview_metadata
mmhd = list(reversed(mm['Dimensions']))
axes = ''.join(TIFF.MM_DIMENSIONS.get(i[0].upper(), 'Q')
for i in mmhd if i[1] > 1)
shape = tuple(int(i[1]) for i in mmhd if i[1] > 1)
return [TiffPageSeries(self.pages, shape, self.pages[0].dtype, axes,
name=mm['ImageName'], stype='FluoView')]
def _mdgel_series(self):
"""Return image series in MD Gel file."""
# only a single page, scaled according to metadata in second page
self.pages.useframes = False
self.pages.keyframe = 0
self.pages.load()
md = self.mdgel_metadata
if md['FileTag'] in (2, 128):
dtype = numpy.dtype('float32')
scale = md['ScalePixel']
scale = scale[0] / scale[1] # rational
if md['FileTag'] == 2:
# square root data format
def transform(a):
return a.astype('float32')**2 * scale
else:
def transform(a):
return a.astype('float32') * scale
else:
transform = None
page = self.pages[0]
return [TiffPageSeries([page], page.shape, dtype, page.axes,
transform=transform, stype='MDGel')]
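In `_mdgel_series` above, FileTag 2 marks square-root-encoded intensities, which the transform recovers as value² × ScalePixel. A quick numeric check with a hypothetical ScalePixel of 1/65536:

```python
# Hypothetical ScalePixel rational reduced to a float.
scale = 1.0 / 65536
# Stored square-root-encoded values; 256**2 * (1/65536) decodes to 1.0.
decoded = [v ** 2 * scale for v in (0, 256)]
print(decoded)  # [0.0, 1.0]
```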
def _sis_series(self):
"""Return image series in Olympus SIS file."""
self.pages.useframes = True
self.pages.keyframe = 0
self.pages.load()
page0 = self.pages[0]
md = self.sis_metadata
if 'shape' in md and 'axes' in md:
shape = md['shape'] + page0.shape
axes = md['axes'] + page0.axes
elif len(self.pages) == 1:
shape = page0.shape
axes = page0.axes
else:
shape = (len(self.pages),) + page0.shape
axes = 'I' + page0.axes
return [
TiffPageSeries(self.pages, shape, page0.dtype, axes, stype='SIS')]
def _nih_series(self):
"""Return image series in NIH file."""
self.pages.useframes = True
self.pages.keyframe = 0
self.pages.load()
page0 = self.pages[0]
if len(self.pages) == 1:
shape = page0.shape
axes = page0.axes
else:
shape = (len(self.pages),) + page0.shape
axes = 'I' + page0.axes
return [
TiffPageSeries(self.pages, shape, page0.dtype, axes, stype='NIH')]
def _ome_series(self):
"""Return image series in OME-TIFF file(s)."""
from xml.etree import cElementTree as etree # delayed import
omexml = self.pages[0].description
try:
root = etree.fromstring(omexml)
except etree.ParseError as e:
# TODO: test badly encoded OME-XML
logging.warning('OME series: %s', str(e))
try:
# might work on Python 2
omexml = omexml.decode('utf-8', 'ignore').encode('utf-8')
root = etree.fromstring(omexml)
except Exception:
return None
self.pages.useframes = True
self.pages.keyframe = 0
self.pages.load()
uuid = root.attrib.get('UUID', None)
self._files = {uuid: self}
dirname = self._fh.dirname
modulo = {}
series = []
for element in root:
if element.tag.endswith('BinaryOnly'):
# TODO: load OME-XML from master or companion file
logging.warning('OME series: not an OME-TIFF master file')
break
if element.tag.endswith('StructuredAnnotations'):
for annot in element:
if not annot.attrib.get('Namespace',
'').endswith('modulo'):
continue
for value in annot:
for modul in value:
for along in modul:
if not along.tag[:-1].endswith('Along'):
continue
axis = along.tag[-1]
newaxis = along.attrib.get('Type', 'other')
newaxis = TIFF.AXES_LABELS[newaxis]
if 'Start' in along.attrib:
step = float(along.attrib.get('Step', 1))
start = float(along.attrib['Start'])
stop = float(along.attrib['End']) + step
labels = numpy.arange(start, stop, step)
else:
labels = [label.text for label in along
if label.tag.endswith('Label')]
modulo[axis] = (newaxis, labels)
if not element.tag.endswith('Image'):
continue
attr = element.attrib
name = attr.get('Name', None)
for pixels in element:
if not pixels.tag.endswith('Pixels'):
continue
attr = pixels.attrib
dtype = attr.get('PixelType', None)
axes = ''.join(reversed(attr['DimensionOrder']))
shape = list(int(attr['Size'+ax]) for ax in axes)
size = product(shape[:-2])
ifds = None
spp = 1 # samples per pixel
# FIXME: this implementation assumes the last two
# dimensions are stored in tiff pages (shape[:-2]).
# Apparently that is not always the case.
for data in pixels:
if data.tag.endswith('Channel'):
attr = data.attrib
if ifds is None:
spp = int(attr.get('SamplesPerPixel', spp))
ifds = [None] * (size // spp)
elif int(attr.get('SamplesPerPixel', 1)) != spp:
raise ValueError(
"cannot handle differing SamplesPerPixel")
continue
if ifds is None:
ifds = [None] * (size // spp)
if not data.tag.endswith('TiffData'):
continue
attr = data.attrib
ifd = int(attr.get('IFD', 0))
num = int(attr.get('NumPlanes', 1 if 'IFD' in attr else 0))
num = int(attr.get('PlaneCount', num))
idx = [int(attr.get('First'+ax, 0)) for ax in axes[:-2]]
try:
idx = numpy.ravel_multi_index(idx, shape[:-2])
except ValueError:
# ImageJ produces invalid ome-xml when cropping
logging.warning('OME series: invalid TiffData index')
continue
for uuid in data:
if not uuid.tag.endswith('UUID'):
continue
if uuid.text not in self._files:
if not self._multifile:
# abort reading multifile OME series
# and fall back to generic series
return []
fname = uuid.attrib['FileName']
try:
tif = TiffFile(os.path.join(dirname, fname))
tif.pages.useframes = True
tif.pages.keyframe = 0
tif.pages.load()
except (IOError, FileNotFoundError, ValueError):
logging.warning(
"OME series: failed to read '%s'", fname)
break
self._files[uuid.text] = tif
tif.close()
pages = self._files[uuid.text].pages
try:
for i in range(num if num else len(pages)):
ifds[idx + i] = pages[ifd + i]
except IndexError:
logging.warning('OME series: index out of range')
# only process first UUID
break
else:
pages = self.pages
try:
for i in range(num if num else len(pages)):
ifds[idx + i] = pages[ifd + i]
except IndexError:
logging.warning('OME series: index out of range')
if all(i is None for i in ifds):
# skip images without data
continue
# set a keyframe on all IFDs
keyframe = None
for i in ifds:
# try to find a TiffPage
if i and i == i.keyframe:
keyframe = i
break
if not keyframe:
# reload a TiffPage from file
for i, keyframe in enumerate(ifds):
if keyframe:
keyframe.parent.pages.keyframe = keyframe.index
keyframe = keyframe.parent.pages[keyframe.index]
ifds[i] = keyframe
break
for i in ifds:
if i is not None:
i.keyframe = keyframe
dtype = keyframe.dtype
series.append(
TiffPageSeries(ifds, shape, dtype, axes, parent=self,
name=name, stype='OME'))
for serie in series:
shape = list(serie.shape)
for axis, (newaxis, labels) in modulo.items():
i = serie.axes.index(axis)
size = len(labels)
if shape[i] == size:
serie.axes = serie.axes.replace(axis, newaxis, 1)
else:
shape[i] //= size
shape.insert(i+1, size)
serie.axes = serie.axes.replace(axis, axis+newaxis, 1)
serie.shape = tuple(shape)
# squeeze dimensions
for serie in series:
serie.shape, serie.axes = squeeze_axes(serie.shape, serie.axes)
return series
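# A hedged usage sketch (file name hypothetical): the series assembled by
# _ome_series are what TiffFile.series exposes, e.g.:
#
#     with TiffFile('stack.ome.tif') as tif:
#         s = tif.series[0]       # TiffPageSeries with stype 'OME'
#         data = s.asarray()      # array shaped according to s.shape/s.axes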
def _lsm_series(self):
"""Return main image series in LSM file. Skip thumbnails."""
lsmi = self.lsm_metadata
axes = TIFF.CZ_LSMINFO_SCANTYPE[lsmi['ScanType']]
if self.pages[0].photometric == 2: # RGB; more than one channel
axes = axes.replace('C', '').replace('XY', 'XYC')
if lsmi.get('DimensionP', 0) > 1:
axes += 'P'
if lsmi.get('DimensionM', 0) > 1:
axes += 'M'
axes = axes[::-1]
shape = tuple(int(lsmi[TIFF.CZ_LSMINFO_DIMENSIONS[i]]) for i in axes)
name = lsmi.get('Name', '')
self.pages.keyframe = 0
pages = self.pages[::2]
dtype = pages[0].dtype
series = [TiffPageSeries(pages, shape, dtype, axes, name=name,
stype='LSM')]
if self.pages[1].is_reduced:
self.pages.keyframe = 1
pages = self.pages[1::2]
dtype = pages[0].dtype
cp, i = 1, 0
while cp < len(pages) and i < len(shape)-2:
cp *= shape[i]
i += 1
shape = shape[:i] + pages[0].shape
axes = axes[:i] + 'CYX'
series.append(TiffPageSeries(pages, shape, dtype, axes, name=name,
stype='LSMreduced'))
return series
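# Note: LSM files interleave full-resolution pages at even indices with
# reduced thumbnail pages at odd indices, hence the [::2]/[1::2] slicing
# above. Hedged sketch of selecting each series (file name hypothetical):
#
#     with TiffFile('scan.lsm') as tif:
#         main = tif.series[0]    # stype 'LSM'
#         thumbs = tif.series[1]  # stype 'LSMreduced', if present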
def _lsm_load_pages(self):
"""Load all pages from LSM file."""
self.pages.cache = True
self.pages.useframes = True
# second series: thumbnails
self.pages.keyframe = 1
keyframe = self.pages[1]
for page in self.pages[1::2]:
page.keyframe = keyframe
# first series: data
self.pages.keyframe = 0
keyframe = self.pages[0]
for page in self.pages[::2]:
page.keyframe = keyframe
def _lsm_fix_strip_offsets(self):
"""Unwrap strip offsets for LSM files greater than 4 GB.
Each series and position requires separate unwrapping (undocumented).
"""
if self.filehandle.size < 2**32:
return
pages = self.pages
npages = len(pages)
series = self.series[0]
axes = series.axes
# find positions
positions = 1
for i in 0, 1:
if series.axes[i] in 'PM':
positions *= series.shape[i]
# make time axis first
if positions > 1:
ntimes = 0
for i in 1, 2:
if axes[i] == 'T':
ntimes = series.shape[i]
break
if ntimes:
div, mod = divmod(npages, 2*positions*ntimes)
assert mod == 0
shape = (positions, ntimes, div, 2)
indices = numpy.arange(product(shape)).reshape(shape)
indices = numpy.moveaxis(indices, 1, 0)
else:
indices = numpy.arange(npages).reshape(-1, 2)
# images of reduced page might be stored first
if pages[0].dataoffsets[0] > pages[1].dataoffsets[0]:
indices = indices[..., ::-1]
# unwrap offsets
wrap = 0
previousoffset = 0
for i in indices.flat:
page = pages[i]
dataoffsets = []
for currentoffset in page.dataoffsets:
if currentoffset < previousoffset:
wrap += 2**32
dataoffsets.append(currentoffset + wrap)
previousoffset = currentoffset
page.dataoffsets = tuple(dataoffsets)
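# Illustration of the 32-bit unwrapping above (values hypothetical):
# offsets are stored modulo 2**32, so a decrease relative to the previous
# offset signals a wrap. Reading offsets 0xFFFFFE00, 0x00000200 in order
# yields 0xFFFFFE00, 0x100000200 after 2**32 is added to the wrap counter.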
def _lsm_fix_strip_bytecounts(self):
"""Set databytecounts to size of compressed data.
The StripByteCounts tag in LSM files contains the number of bytes
for the uncompressed data.
"""
pages = self.pages
if pages[0].compression == 1:
return
# sort pages by first strip offset
pages = sorted(pages, key=lambda p: p.dataoffsets[0])
npages = len(pages) - 1
for i, page in enumerate(pages):
if page.index % 2:
continue
offsets = page.dataoffsets
bytecounts = page.databytecounts
if i < npages:
lastoffset = pages[i+1].dataoffsets[0]
else:
# LZW compressed strips might be longer than uncompressed
lastoffset = min(offsets[-1] + 2*bytecounts[-1], self._fh.size)
offsets = offsets + (lastoffset,)
page.databytecounts = tuple(offsets[j+1] - offsets[j]
for j in range(len(bytecounts)))
def __getattr__(self, name):
"""Return 'is_flag' attributes from first page."""
if name[3:] in TIFF.FILE_FLAGS:
if not self.pages:
return False
value = bool(getattr(self.pages[0], name))
setattr(self, name, value)
return value
raise AttributeError("'%s' object has no attribute '%s'" %
(self.__class__.__name__, name))
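# Hedged example: 'is_' flag attributes such as is_ome or is_lsm are
# resolved here from the first page on first access and then cached on the
# instance via setattr, so later lookups bypass __getattr__:
#
#     tif = TiffFile('example.tif')   # hypothetical file
#     tif.is_ome                      # computed from tif.pages[0], cached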
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def __str__(self, detail=0, width=79):
"""Return string containing information about file.
The detail parameter specifies the level of detail returned:
0: file only.
1: all series, first page of series and its tags.
2: large tag values and file metadata.
3: all pages.
"""
info = [
"TiffFile '%s'",
format_size(self._fh.size),
'' if byteorder_isnative(self.tiff.byteorder) else {
'<': 'little-endian', '>': 'big-endian'}[self.tiff.byteorder]]
if self.is_bigtiff:
info.append('BigTiff')
info.append(' '.join(f.lower() for f in self.flags))
if len(self.pages) > 1:
info.append('%i Pages' % len(self.pages))
if len(self.series) > 1:
info.append('%i Series' % len(self.series))
if len(self._files) > 1:
info.append('%i Files' % (len(self._files)))
info = ' '.join(info)
info = info.replace('  ', ' ').replace('  ', ' ')
info = info % snipstr(self._fh.name, max(12, width+2-len(info)))
if detail <= 0:
return info
info = [info]
info.append('\n'.join(str(s) for s in self.series))
if detail >= 3:
info.extend((TiffPage.__str__(p, detail=detail, width=width)
for p in self.pages
if p is not None))
else:
info.extend((TiffPage.__str__(s.pages[0], detail=detail,
width=width)
for s in self.series
if s.pages[0] is not None))
if detail >= 2:
for name in sorted(self.flags):
if hasattr(self, name + '_metadata'):
m = getattr(self, name + '_metadata')
if m:
info.append(
'%s_METADATA\n%s' % (name.upper(),
pformat(m, width=width,
height=detail*12)))
return '\n\n'.join(info).replace('\n\n\n', '\n\n')
@lazyattr
def flags(self):
"""Return set of file flags."""
return set(name.lower() for name in sorted(TIFF.FILE_FLAGS)
if getattr(self, 'is_' + name))
@lazyattr
def is_mdgel(self):
"""File has MD Gel format."""
try:
return self.pages[0].is_mdgel or self.pages[1].is_mdgel
except IndexError:
return False
@property
def is_movie(self):
"""Return if file is a movie."""
return self.pages.useframes
@lazyattr
def shaped_metadata(self):
"""Return tifffile metadata from JSON descriptions as dicts."""
if not self.is_shaped:
return None
return tuple(json_description_metadata(s.pages[0].is_shaped)
for s in self.series if s.stype.lower() == 'shaped')
@lazyattr
def ome_metadata(self):
"""Return OME XML as dict."""
# TODO: remove this or return XML?
if not self.is_ome:
return None
return xml2dict(self.pages[0].description)['OME']
@lazyattr
def lsm_metadata(self):
"""Return LSM metadata from CZ_LSMINFO tag as dict."""
if not self.is_lsm:
return None
return self.pages[0].tags['CZ_LSMINFO'].value
@lazyattr
def stk_metadata(self):
"""Return STK metadata from UIC tags as dict."""
if not self.is_stk:
return None
page = self.pages[0]
tags = page.tags
result = {}
result['NumberPlanes'] = tags['UIC2tag'].count
if page.description:
result['PlaneDescriptions'] = page.description.split('\0')
# result['plane_descriptions'] = stk_description_metadata(
# page.image_description)
if 'UIC1tag' in tags:
result.update(tags['UIC1tag'].value)
if 'UIC3tag' in tags:
result.update(tags['UIC3tag'].value) # wavelengths
if 'UIC4tag' in tags:
result.update(tags['UIC4tag'].value) # override uic1 tags
uic2tag = tags['UIC2tag'].value
result['ZDistance'] = uic2tag['ZDistance']
result['TimeCreated'] = uic2tag['TimeCreated']
result['TimeModified'] = uic2tag['TimeModified']
try:
result['DatetimeCreated'] = numpy.array(
[julian_datetime(*dt) for dt in
zip(uic2tag['DateCreated'], uic2tag['TimeCreated'])],
dtype='datetime64[ns]')
result['DatetimeModified'] = numpy.array(
[julian_datetime(*dt) for dt in
zip(uic2tag['DateModified'], uic2tag['TimeModified'])],
dtype='datetime64[ns]')
except ValueError as e:
logging.warning('STK metadata: %s', str(e))
return result
@lazyattr
def imagej_metadata(self):
"""Return consolidated ImageJ metadata as dict."""
if not self.is_imagej:
return None
page = self.pages[0]
result = imagej_description_metadata(page.is_imagej)
if 'IJMetadata' in page.tags:
try:
result.update(page.tags['IJMetadata'].value)
except Exception:
pass
return result
@lazyattr
def fluoview_metadata(self):
"""Return consolidated FluoView metadata as dict."""
if not self.is_fluoview:
return None
result = {}
page = self.pages[0]
result.update(page.tags['MM_Header'].value)
# TODO: read stamps from all pages
result['Stamp'] = page.tags['MM_Stamp'].value
# skip parsing image description; not reliable
# try:
# t = fluoview_description_metadata(page.image_description)
# if t is not None:
# result['ImageDescription'] = t
# except Exception as e:
# logging.warning('FluoView metadata: '
# 'failed to parse image description (%s)',
# str(e))
return result
@lazyattr
def nih_metadata(self):
"""Return NIH Image metadata from NIHImageHeader tag as dict."""
if not self.is_nih:
return None
return self.pages[0].tags['NIHImageHeader'].value
@lazyattr
def fei_metadata(self):
"""Return FEI metadata from SFEG or HELIOS tags as dict."""
if not self.is_fei:
return None
tags = self.pages[0].tags
if 'FEI_SFEG' in tags:
return tags['FEI_SFEG'].value
if 'FEI_HELIOS' in tags:
return tags['FEI_HELIOS'].value
return None
@lazyattr
def sem_metadata(self):
"""Return SEM metadata from CZ_SEM tag as dict."""
if not self.is_sem:
return None
return self.pages[0].tags['CZ_SEM'].value
@lazyattr
def sis_metadata(self):
"""Return Olympus SIS metadata from SIS and INI tags as dict."""
if not self.is_sis:
return None
tags = self.pages[0].tags
result = {}
try:
result.update(tags['OlympusINI'].value)
except Exception:
pass
try:
result.update(tags['OlympusSIS'].value)
except Exception:
pass
return result
@lazyattr
def mdgel_metadata(self):
"""Return consolidated metadata from MD GEL tags as dict."""
for page in self.pages[:2]:
if 'MDFileTag' in page.tags:
tags = page.tags
break
else:
return None
result = {}
for code in range(33445, 33453):
name = TIFF.TAGS[code]
if name not in tags:
continue
result[name[2:]] = tags[name].value
return result
@lazyattr
def andor_metadata(self):
"""Return Andor tags as dict."""
return self.pages[0].andor_tags
@lazyattr
def epics_metadata(self):
"""Return EPICS areaDetector tags as dict."""
return self.pages[0].epics_tags
@lazyattr
def tvips_metadata(self):
"""Return TVIPS tag as dict."""
if not self.is_tvips:
return None
return self.pages[0].tags['TVIPS'].value
@lazyattr
def metaseries_metadata(self):
"""Return MetaSeries metadata from image description as dict."""
if not self.is_metaseries:
return None
return metaseries_description_metadata(self.pages[0].description)
@lazyattr
def pilatus_metadata(self):
"""Return Pilatus metadata from image description as dict."""
if not self.is_pilatus:
return None
return pilatus_description_metadata(self.pages[0].description)
@lazyattr
def micromanager_metadata(self):
"""Return consolidated MicroManager metadata as dict."""
if not self.is_micromanager:
return None
# from file header
result = read_micromanager_metadata(self._fh)
# from tag
result.update(self.pages[0].tags['MicroManagerMetadata'].value)
return result
@lazyattr
def scanimage_metadata(self):
"""Return ScanImage non-varying frame and ROI metadata as dict."""
if not self.is_scanimage:
return None
result = {}
try:
framedata, roidata = read_scanimage_metadata(self._fh)
result['FrameData'] = framedata
result.update(roidata)
except ValueError:
pass
# TODO: scanimage_artist_metadata
try:
result['Description'] = scanimage_description_metadata(
self.pages[0].description)
except Exception as e:
logging.warning('ScanImage metadata: %s', str(e))
return result
@property
def geotiff_metadata(self):
"""Return GeoTIFF metadata from first page as dict."""
if not self.is_geotiff:
return None
return self.pages[0].geotiff_tags
class TiffPages(object):
"""Sequence of TIFF image file directories.
"""
def __init__(self, parent):
"""Initialize instance and read first TiffPage from file.
If parent is a TiffFile, the file position must be at the location that
stores the offset to the first TiffPage. If parent is a TiffPage, page
offsets are read from the SubIFDs tag.
"""
self.parent = None
self.pages = [] # cache of TiffPages, TiffFrames, or their offsets
self.complete = False # True if offsets to all pages were read
self._tiffpage = TiffPage # class for reading tiff pages
self._keyframe = None
self._cache = True
self._nextpageoffset = None
if isinstance(parent, TiffFile):
# read offset to first page from current file position
self.parent = parent
fh = parent.filehandle
self._nextpageoffset = fh.tell()
offset = struct.unpack(parent.tiff.offsetformat,
fh.read(parent.tiff.offsetsize))[0]
elif 'SubIFDs' not in parent.tags:
self.complete = True
return
else:
# use offsets from SubIFDs tag
self.parent = parent.parent
fh = self.parent.filehandle
offsets = parent.tags['SubIFDs'].value
offset = offsets[0]
if offset == 0:
logging.warning('TiffPages: file contains no pages')
self.complete = True
return
if offset >= fh.size:
logging.warning('TiffPages: invalid page offset (%i)', offset)
self.complete = True
return
# read and cache first page
fh.seek(offset)
page = TiffPage(self.parent, index=0)
self.pages.append(page)
self._keyframe = page
if self._nextpageoffset is None:
self.pages.extend(offsets[1:])
self.complete = True
@property
def cache(self):
"""Return if pages/frames are currenly being cached."""
return self._cache
@cache.setter
def cache(self, value):
"""Enable or disable caching of pages/frames. Clear cache if False."""
value = bool(value)
if self._cache and not value:
self.clear()
self._cache = value
@property
def useframes(self):
"""Return if currently using TiffFrame (True) or TiffPage (False)."""
return self._tiffpage == TiffFrame and TiffFrame is not TiffPage
@useframes.setter
def useframes(self, value):
"""Set to use TiffFrame (True) or TiffPage (False)."""
self._tiffpage = TiffFrame if value else TiffPage
@property
def keyframe(self):
"""Return index of current keyframe."""
return self._keyframe.index
@keyframe.setter
def keyframe(self, index):
"""Set current keyframe. Load TiffPage from file if necessary."""
if self._keyframe.index == index:
return
if self.complete or 0 <= index < len(self.pages):
page = self.pages[index]
if isinstance(page, TiffPage):
self._keyframe = page
return
elif isinstance(page, TiffFrame):
# remove existing frame
self.pages[index] = page.offset
# load TiffPage from file
useframes = self.useframes
self._tiffpage = TiffPage
self._keyframe = self[index]
self.useframes = useframes
@property
def next_page_offset(self):
"""Return offset where offset to a new page can be stored."""
if not self.complete:
self._seek(-1)
return self._nextpageoffset
def load(self):
"""Read all remaining pages from file."""
pages = self.pages
if not pages:
return
fh = self.parent.filehandle
keyframe = self._keyframe
if not self.complete:
self._seek(-1)
for i, page in enumerate(pages):
if isinstance(page, inttypes):
fh.seek(page)
page = self._tiffpage(self.parent, index=i, keyframe=keyframe)
pages[i] = page
def clear(self, fully=True):
"""Delete all but first page from cache. Set keyframe to first page."""
pages = self.pages
if not self._cache or not pages:
return
self._keyframe = pages[0]
if fully:
# delete all but first TiffPage/TiffFrame
for i, page in enumerate(pages[1:]):
if not isinstance(page, inttypes):
pages[i+1] = page.offset
elif TiffFrame is not TiffPage:
# delete only TiffFrames
for i, page in enumerate(pages):
if isinstance(page, TiffFrame):
pages[i] = page.offset
def _seek(self, index, maxpages=2**22):
"""Seek file to offset of specified page."""
pages = self.pages
if not pages:
return
fh = self.parent.filehandle
if fh.closed:
raise RuntimeError('FileHandle is closed')
lenpages = len(pages)
if self.complete or 0 <= index < lenpages:
page = pages[index]
offset = page if isinstance(page, inttypes) else page.offset
fh.seek(offset)
return
tiff = self.parent.tiff
offsetformat = tiff.offsetformat
offsetsize = tiff.offsetsize
tagnoformat = tiff.tagnoformat
tagnosize = tiff.tagnosize
tagsize = tiff.tagsize
unpack = struct.unpack
page = pages[-1]
offset = page if isinstance(page, inttypes) else page.offset
while lenpages < maxpages:
# read offsets to pages from file until index is reached
fh.seek(offset)
# skip tags
try:
tagno = unpack(tagnoformat, fh.read(tagnosize))[0]
if tagno > 4096:
raise ValueError('suspicious number of tags: %i' % tagno)
except Exception:
logging.warning(
'TiffPages: corrupted tag list at offset %i', offset)
del pages[-1]
lenpages -= 1
self.complete = True
break
self._nextpageoffset = offset + tagnosize + tagno * tagsize
fh.seek(self._nextpageoffset)
# read offset to next page
offset = unpack(offsetformat, fh.read(offsetsize))[0]
if offset == 0:
self.complete = True
break
if offset >= fh.size:
logging.warning('TiffPages: invalid page offset (%i)', offset)
self.complete = True
break
pages.append(offset)
lenpages += 1
if 0 <= index < lenpages:
break
# detect some circular references
if lenpages == 100:
for p in pages[:-1]:
if offset == (p if isinstance(p, inttypes) else p.offset):
raise IndexError('invalid circular IFD reference')
if index >= lenpages:
raise IndexError('index out of range')
page = pages[index]
fh.seek(page if isinstance(page, inttypes) else page.offset)
def __bool__(self):
"""Return True if file contains any pages."""
return len(self.pages) > 0
def __len__(self):
"""Return number of pages in file."""
if not self.complete:
self._seek(-1)
return len(self.pages)
def __getitem__(self, key):
"""Return specified page(s) from cache or file."""
pages = self.pages
if not pages:
raise IndexError('index out of range')
if key == 0:
return pages[key]
if isinstance(key, slice):
start, stop, _ = key.indices(2**31-1)
if not self.complete and max(stop, start) > len(pages):
self._seek(-1)
return [self[i] for i in range(*key.indices(len(pages)))]
if isinstance(key, collections.abc.Iterable):
return [self[k] for k in key]
if self.complete and key >= len(pages):
raise IndexError('index out of range')
try:
page = pages[key]
except IndexError:
page = 0
if not isinstance(page, inttypes):
return page
self._seek(key)
page = self._tiffpage(self.parent, index=key, keyframe=self._keyframe)
if self._cache:
pages[key] = page
return page
def __iter__(self):
"""Return iterator over all pages."""
i = 0
while True:
try:
yield self[i]
i += 1
except IndexError:
break
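# Hedged usage sketch of TiffPages lazy access (file name hypothetical):
#
#     with TiffFile('large.tif') as tif:
#         tif.pages.useframes = True   # read lightweight TiffFrames
#         tif.pages.keyframe = 0       # frames share shape/dtype of page 0
#         for page in tif.pages:       # pages/frames are read on demand
#             data = page.asarray()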
class TiffPage(object):
"""TIFF image file directory (IFD).
Attributes
----------
index : int
Index of page in file.
dtype : numpy.dtype or None
Data type (native byte order) of the image in IFD.
shape : tuple
Dimensions of the image in IFD.
axes : str
Axes label codes:
'X' width, 'Y' height, 'S' sample, 'I' image series|page|plane,
'Z' depth, 'C' color|em-wavelength|channel, 'E' ex-wavelength|lambda,
'T' time, 'R' region|tile, 'A' angle, 'P' phase, 'H' lifetime,
'L' exposure, 'V' event, 'Q' unknown, '_' missing
tags : dict
Dictionary of tags in IFD. {tag.name: TiffTag}
colormap : numpy.ndarray
Color look up table, if exists.
All attributes are read-only.
Notes
-----
The internal, normalized '_shape' attribute is 6 dimensional:
0 : number planes/images (stk, ij).
1 : planar samplesperpixel.
2 : imagedepth Z (sgi).
3 : imagelength Y.
4 : imagewidth X.
5 : contig samplesperpixel.
"""
# default properties; will be updated from tags
subfiletype = 0
imagewidth = 0
imagelength = 0
imagedepth = 1
tilewidth = 0
tilelength = 0
tiledepth = 1
bitspersample = 1
samplesperpixel = 1
sampleformat = 1
rowsperstrip = 2**32-1
compression = 1
planarconfig = 1
fillorder = 1
photometric = 0
predictor = 1
extrasamples = 1
colormap = None
software = ''
description = ''
description1 = ''
def __init__(self, parent, index, keyframe=None):
"""Initialize instance from file.
The file handle position must be at offset to a valid IFD.
"""
self.parent = parent
self.index = index
self.shape = ()
self._shape = ()
self.dtype = None
self._dtype = None
self.axes = ''
self.tags = tags = {}
self.dataoffsets = ()
self.databytecounts = ()
tiff = parent.tiff
# read TIFF IFD structure and its tags from file
fh = parent.filehandle
self.offset = fh.tell() # offset to this IFD
try:
tagno = struct.unpack(
tiff.tagnoformat, fh.read(tiff.tagnosize))[0]
if tagno > 4096:
raise ValueError('suspicious number of tags')
except Exception:
raise ValueError('corrupted tag list at offset %i' % self.offset)
tagoffset = self.offset + tiff.tagnosize # fh.tell()
tagsize = tiff.tagsize
tagindex = -tagsize
data = fh.read(tagsize * tagno)
for _ in range(tagno):
tagindex += tagsize
try:
tag = TiffTag(parent, data[tagindex:tagindex+tagsize],
tagoffset+tagindex)
except TiffTag.Error as e:
logging.warning('TiffTag: %s', str(e))
continue
tagname = tag.name
if tagname not in tags:
name = tagname
tags[name] = tag
else:
# some files contain multiple tags with same code
# e.g. MicroManager files contain two ImageDescription tags
i = 1
while True:
name = '%s%i' % (tagname, i)
if name not in tags:
tags[name] = tag
break
name = TIFF.TAG_ATTRIBUTES.get(name, '')
if name:
if name[:3] in 'sof des' and not isinstance(tag.value, str):
pass # wrong string type for software, description
else:
setattr(self, name, tag.value)
if not tags:
return  # empty IFD; as found in FIBICS files
if 'SubfileType' in tags and self.subfiletype == 0:
sft = tags['SubfileType'].value
if sft == 2:
self.subfiletype = 0b1 # reduced image
elif sft == 3:
self.subfiletype = 0b10 # multi-page
# consolidate private tags; remove them from self.tags
if self.is_andor:
self.andor_tags
elif self.is_epics:
self.epics_tags
# elif self.is_ndpi:
# self.ndpi_tags
if self.is_sis and 'GPSTag' in tags:
# TODO: can't change tag.name
tags['OlympusSIS2'] = tags['GPSTag']
del tags['GPSTag']
if self.is_lsm or (self.index and self.parent.is_lsm):
# correct non standard LSM bitspersample tags
tags['BitsPerSample']._fix_lsm_bitspersample(self)
if self.compression == 1 and self.predictor != 1:
# work around bug in LSM510 software
self.predictor = 1
if self.is_vista or (self.index and self.parent.is_vista):
# ISS Vista writes wrong ImageDepth tag
self.imagedepth = 1
if self.is_stk and 'UIC1tag' in tags and not tags['UIC1tag'].value:
# read UIC1tag now that plane count is known
uic1tag = tags['UIC1tag']
fh.seek(uic1tag.valueoffset)
tags['UIC1tag'].value = read_uic1tag(
fh, tiff.byteorder, uic1tag.dtype,
uic1tag.count, None, tags['UIC2tag'].count)
if 'IJMetadata' in tags:
# decode IJMetadata tag
try:
tags['IJMetadata'].value = imagej_metadata(
tags['IJMetadata'].value,
tags['IJMetadataByteCounts'].value,
tiff.byteorder)
except Exception as e:
logging.warning('TiffPage: %s', str(e))
if 'BitsPerSample' in tags:
tag = tags['BitsPerSample']
if tag.count == 1:
self.bitspersample = tag.value
else:
# LSM might list more items than samplesperpixel
value = tag.value[:self.samplesperpixel]
if any((v-value[0] for v in value)):
self.bitspersample = value
else:
self.bitspersample = value[0]
if 'SampleFormat' in tags:
tag = tags['SampleFormat']
if tag.count == 1:
self.sampleformat = tag.value
else:
value = tag.value[:self.samplesperpixel]
if any((v-value[0] for v in value)):
self.sampleformat = value
else:
self.sampleformat = value[0]
if 'TileWidth' in tags:
self.rowsperstrip = None
elif 'ImageLength' in tags:
if 'RowsPerStrip' not in tags or tags['RowsPerStrip'].count > 1:
self.rowsperstrip = self.imagelength
# self.stripsperimage = int(math.floor(
# float(self.imagelength + self.rowsperstrip - 1) /
# self.rowsperstrip))
# determine dtype
dtype = self.sampleformat, self.bitspersample
dtype = TIFF.SAMPLE_DTYPES.get(dtype, None)
if dtype is not None:
dtype = numpy.dtype(dtype)
self.dtype = self._dtype = dtype
# determine shape of data
imagelength = self.imagelength
imagewidth = self.imagewidth
imagedepth = self.imagedepth
samplesperpixel = self.samplesperpixel
if self.is_stk:
assert self.imagedepth == 1
uictag = tags['UIC2tag'].value
planes = tags['UIC2tag'].count
if self.planarconfig == 1:
self._shape = (
planes, 1, 1, imagelength, imagewidth, samplesperpixel)
if samplesperpixel == 1:
self.shape = (planes, imagelength, imagewidth)
self.axes = 'YX'
else:
self.shape = (
planes, imagelength, imagewidth, samplesperpixel)
self.axes = 'YXS'
else:
self._shape = (
planes, samplesperpixel, 1, imagelength, imagewidth, 1)
if samplesperpixel == 1:
self.shape = (planes, imagelength, imagewidth)
self.axes = 'YX'
else:
self.shape = (
planes, samplesperpixel, imagelength, imagewidth)
self.axes = 'SYX'
# detect type of series
if planes == 1:
self.shape = self.shape[1:]
elif numpy.all(uictag['ZDistance'] != 0):
self.axes = 'Z' + self.axes
elif numpy.all(numpy.diff(uictag['TimeCreated']) != 0):
self.axes = 'T' + self.axes
else:
self.axes = 'I' + self.axes
elif self.photometric == 2 or samplesperpixel > 1: # PHOTOMETRIC.RGB
if self.planarconfig == 1:
self._shape = (
1, 1, imagedepth, imagelength, imagewidth, samplesperpixel)
if imagedepth == 1:
self.shape = (imagelength, imagewidth, samplesperpixel)
self.axes = 'YXS'
else:
self.shape = (
imagedepth, imagelength, imagewidth, samplesperpixel)
self.axes = 'ZYXS'
else:
self._shape = (1, samplesperpixel, imagedepth,
imagelength, imagewidth, 1)
if imagedepth == 1:
self.shape = (samplesperpixel, imagelength, imagewidth)
self.axes = 'SYX'
else:
self.shape = (
samplesperpixel, imagedepth, imagelength, imagewidth)
self.axes = 'SZYX'
else:
self._shape = (1, 1, imagedepth, imagelength, imagewidth, 1)
if imagedepth == 1:
self.shape = (imagelength, imagewidth)
self.axes = 'YX'
else:
self.shape = (imagedepth, imagelength, imagewidth)
self.axes = 'ZYX'
# dataoffsets and databytecounts
if 'TileOffsets' in tags:
self.dataoffsets = tags['TileOffsets'].value
elif 'StripOffsets' in tags:
self.dataoffsets = tags['StripOffsets'].value
else:
self.dataoffsets = (0,)
if 'TileByteCounts' in tags:
self.databytecounts = tags['TileByteCounts'].value
elif 'StripByteCounts' in tags:
self.databytecounts = tags['StripByteCounts'].value
else:
self.databytecounts = (
product(self.shape) * (self.bitspersample // 8),)
if self.compression != 1:
logging.warning('TiffPage: ByteCounts tag is missing')
# assert len(self.shape) == len(self.axes)
def asarray(self, out=None, squeeze=True, lock=None, reopen=True,
maxsize=2**44, maxworkers=None, validate=True):
"""Read image data from file and return as numpy array.
Raise ValueError if format is unsupported.
Parameters
----------
out : numpy.ndarray, str, or file-like object
Buffer where image data will be saved.
If None (default), a new array will be created.
If numpy.ndarray, a writable array of compatible dtype and shape.
If 'memmap', directly memory-map the image data in the TIFF file
if possible; else create a memory-mapped array in a temporary file.
If str or open file, the file name or file object used to
create a memory-map to an array stored in a binary file on disk.
squeeze : bool
If True, all length-1 dimensions (except X and Y) are
squeezed out from the array.
If False, the shape of the returned array might be different from
the page.shape.
lock : {RLock, NullContext}
A reentrant lock used to synchronize reads from file.
If None (default), the lock of the parent's filehandle is used.
reopen : bool
If True (default) and the parent file handle is closed, the file
is temporarily re-opened and closed if no exception occurs.
maxsize: int or None
Maximum size of data before a ValueError is raised.
Can be used to guard against denial-of-service attacks. Default: 16 TB.
maxworkers : int or None
Maximum number of threads to concurrently decode tile data.
If None (default), up to half the CPU cores are used for
compressed tiles.
See remarks in TiffFile.asarray.
validate : bool
If True (default), validate various parameters.
If None, only validate parameters and return None.
Returns
-------
numpy.ndarray
Numpy array of decompressed, depredicted, and unpacked image data
read from Strip/Tile Offsets/ByteCounts, formatted according to
shape and dtype metadata found in tags and parameters.
Photometric conversion, pre-multiplied alpha, orientation, and
colorimetry corrections are not applied. Specifically, CMYK images
are not converted to RGB, MinIsWhite images are not inverted,
and color palettes are not applied.
"""
# properties from TiffPage or TiffFrame
fh = self.parent.filehandle
byteorder = self.parent.tiff.byteorder
offsets, bytecounts = self.offsets_bytecounts
self = self.keyframe # self or keyframe
if not self._shape or product(self._shape) == 0:
return None
tags = self.tags
if validate or validate is None:
if maxsize and product(self._shape) > maxsize:
raise ValueError('data are too large %s' % str(self._shape))
if self.dtype is None:
raise ValueError('data type not supported: %s%i' % (
self.sampleformat, self.bitspersample))
if self.compression not in TIFF.DECOMPESSORS:
raise ValueError(
'cannot decompress %s' % self.compression.name)
if 'SampleFormat' in tags:
tag = tags['SampleFormat']
if tag.count != 1 and any((i-tag.value[0] for i in tag.value)):
raise ValueError(
'sample formats do not match %s' % tag.value)
if self.is_chroma_subsampled and (self.compression != 7 or
self.planarconfig == 2):
raise NotImplementedError('chroma subsampling not supported')
if validate is None:
return None
lock = fh.lock if lock is None else lock
with lock:
closed = fh.closed
if closed:
if reopen:
fh.open()
else:
raise IOError('file handle is closed')
dtype = self._dtype
shape = self._shape
imagewidth = self.imagewidth
imagelength = self.imagelength
imagedepth = self.imagedepth
bitspersample = self.bitspersample
typecode = byteorder + dtype.char
lsb2msb = self.fillorder == 2
istiled = self.is_tiled
if istiled:
tilewidth = self.tilewidth
tilelength = self.tilelength
tiledepth = self.tiledepth
tw = (imagewidth + tilewidth - 1) // tilewidth
tl = (imagelength + tilelength - 1) // tilelength
td = (imagedepth + tiledepth - 1) // tiledepth
shape = (shape[0], shape[1],
td*tiledepth, tl*tilelength, tw*tilewidth, shape[-1])
tileshape = (tiledepth, tilelength, tilewidth, shape[-1])
tiledshape = (td, tl, tw)
tilesize = product(tileshape)
runlen = tilewidth
else:
runlen = imagewidth
if self.planarconfig == 1:
runlen *= self.samplesperpixel
if isinstance(out, str) and out == 'memmap' and self.is_memmappable:
with lock:
result = fh.memmap_array(typecode, shape, offset=offsets[0])
elif self.is_contiguous:
if out is not None:
out = create_output(out, shape, dtype)
with lock:
fh.seek(offsets[0])
result = fh.read_array(typecode, product(shape), out=out)
if out is None and not result.dtype.isnative:
# swap byte order and dtype without copy
result.byteswap(True)
result = result.newbyteorder()
if lsb2msb:
bitorder_decode(result, out=result)
else:
result = create_output(out, shape, dtype)
decompress = TIFF.DECOMPESSORS[self.compression]
if self.compression == 7: # COMPRESSION.JPEG
outcolorspace = None
if lsb2msb:
logging.warning(
'TiffPage.asarray: disabling LSB2MSB for JPEG')
lsb2msb = False
if 'JPEGTables' in tags:
table = tags['JPEGTables'].value
else:
table = None
if 'ExtraSamples' in tags:
colorspace = None
else:
colorspace = TIFF.PHOTOMETRIC(self.photometric).name
if colorspace == 'YCBCR':
outcolorspace = 'RGB'
def decompress(data, bitspersample=bitspersample, table=table,
colorspace=colorspace,
outcolorspace=outcolorspace, out=None,
_decompress=decompress):
return _decompress(data, bitspersample, table,
colorspace, outcolorspace, out)
def unpack(data):
return data.reshape(-1)
elif bitspersample in (8, 16, 32, 64, 128):
if (bitspersample * runlen) % 8:
raise ValueError('data and sample size mismatch')
if self.predictor == 3: # PREDICTOR.FLOATINGPOINT
# the floating point horizontal differencing decoder
# needs the raw byte order
typecode = dtype.char
def unpack(data, typecode=typecode, out=None):
try:
# read only numpy array
return numpy.frombuffer(data, typecode)
except ValueError:
# strips may be missing EOI
# logging.warning('TiffPage.asarray: ...')
xlen = ((len(data) // (bitspersample // 8)) *
(bitspersample // 8))
return numpy.frombuffer(data[:xlen], typecode)
elif isinstance(bitspersample, tuple):
def unpack(data, out=None):
return unpack_rgb(data, typecode, bitspersample)
else:
def unpack(data, out=None):
return packints_decode(data, typecode, bitspersample,
runlen)
if istiled:
unpredict = TIFF.UNPREDICTORS[self.predictor]
def tile_decode(tile, tileindex,
tileshape=tileshape, tiledshape=tiledshape,
lsb2msb=lsb2msb, decompress=decompress,
unpack=unpack, unpredict=unpredict,
out=result[0]):
tw = tileindex % tiledshape[2] * tileshape[2]
tl = ((tileindex // tiledshape[2])
% tiledshape[1] * tileshape[1])
td = ((tileindex // (tiledshape[2] * tiledshape[1]))
% tiledshape[0] * tileshape[0])
pl = (tileindex // (tiledshape[2] * tiledshape[1]
* tiledshape[0]))
if tile:
if lsb2msb:
tile = bitorder_decode(tile, out=tile)
tile = decompress(tile)
tile = unpack(tile)
tile = tile[:tilesize]
try:
tile.shape = tileshape
except ValueError:
t = numpy.zeros(tileshape, tile.dtype)
try:
s = (min(imagedepth - td, tileshape[0]),
min(imagelength - tl, tileshape[1]),
min(imagewidth - tw, tileshape[2]),
tileshape[3])
tile.shape = s
t[:s[0], :s[1], :s[2]] = tile
except Exception:
# incomplete tiles; see gdal issue #1179
logging.warning('TiffPage.asarray: '
'invalid tile data %s %s',
tile.shape, tileshape)
t = t.reshape(-1)
s = min(tile.size, t.size)
t[:s] = tile[:s]
tile = t.reshape(tileshape)
tile = unpredict(tile, axis=-2, out=tile)
else:
tile = 0
out[pl,
td:td+tileshape[0],
tl:tl+tileshape[1],
tw:tw+tileshape[2]] = tile
tile_iter = buffered_read(fh, lock, offsets, bytecounts)
if maxworkers is None:
maxworkers = 0 if self.compression > 1 else 1
if maxworkers == 0:
import multiprocessing # noqa: delay import
maxworkers = multiprocessing.cpu_count() // 2
if maxworkers < 2:
for i, tile in enumerate(tile_iter):
tile_decode(tile, i)
else:
# decode first tile un-threaded to catch exceptions
tile_decode(next(tile_iter), 0)
with concurrent.futures.ThreadPoolExecutor(maxworkers
) as executor:
executor.map(tile_decode, tile_iter,
range(1, len(offsets)))
result = result[..., :imagedepth, :imagelength, :imagewidth, :]
else:
strip_size = self.rowsperstrip * self.imagewidth
if self.planarconfig == 1:
strip_size *= self.samplesperpixel
result = result.reshape(-1)
index = 0
for strip in buffered_read(fh, lock, offsets, bytecounts):
if lsb2msb:
strip = bitorder_decode(strip, out=strip)
strip = decompress(strip)
strip = unpack(strip)
size = min(result.size, strip.size, strip_size,
result.size - index)
result[index:index+size] = strip[:size]
del strip
index += size
result.shape = self._shape
if self.predictor != 1 and not (istiled and not self.is_contiguous):
unpredict = TIFF.UNPREDICTORS[self.predictor]
result = unpredict(result, axis=-2, out=result)
if squeeze:
try:
result.shape = self.shape
except ValueError:
logging.warning(
'TiffPage.asarray: failed to reshape from %s to %s',
result.shape, self.shape)
if closed:
# TODO: file should remain open if an exception occurred above
fh.close()
return result
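The `tile_decode` closure in `asarray` above maps a flat tile index to a (plane, depth, length, width) origin using integer division and modulo over the tiled grid. A standalone sketch of that arithmetic (the function name `tile_origin` is illustrative, not part of tifffile; it assumes the same row-major tile ordering used above):

```python
def tile_origin(tileindex, tiledshape, tileshape):
    """Return (plane, depth, length, width) origin for a flat tile index.

    tiledshape is the number of tiles per (depth, length, width) dimension;
    tileshape is (tiledepth, tilelength, tilewidth, samples).
    """
    td_, tl_, tw_ = tiledshape
    tw = tileindex % tw_ * tileshape[2]                # width offset
    tl = tileindex // tw_ % tl_ * tileshape[1]         # length offset
    td = tileindex // (tw_ * tl_) % td_ * tileshape[0]  # depth offset
    pl = tileindex // (tw_ * tl_ * td_)                # plane index
    return pl, td, tl, tw
```

For a 2x3x4 tile grid of 8x16x32 tiles, index 13 lands at plane 0, depth 8, length 0, width 32.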
def asrgb(self, uint8=False, alpha=None, colormap=None,
dmin=None, dmax=None, **kwargs):
"""Return image data as RGB(A).
Work in progress.
"""
data = self.asarray(**kwargs)
self = self.keyframe # self or keyframe
photometric = self.photometric
PHOTOMETRIC = TIFF.PHOTOMETRIC
if photometric == PHOTOMETRIC.PALETTE:
colormap = self.colormap
if (colormap.shape[1] < 2**self.bitspersample or
self.dtype.char not in 'BH'):
raise ValueError('cannot apply colormap')
if uint8:
if colormap.max() > 255:
colormap >>= 8
colormap = colormap.astype('uint8')
if 'S' in self.axes:
data = data[..., 0] if self.planarconfig == 1 else data[0]
data = apply_colormap(data, colormap)
elif photometric == PHOTOMETRIC.RGB:
if 'ExtraSamples' in self.tags:
if alpha is None:
alpha = TIFF.EXTRASAMPLE
extrasamples = self.extrasamples
if self.tags['ExtraSamples'].count == 1:
extrasamples = (extrasamples,)
for i, exs in enumerate(extrasamples):
if exs in alpha:
if self.planarconfig == 1:
data = data[..., [0, 1, 2, 3+i]]
else:
data = data[:, [0, 1, 2, 3+i]]
break
else:
if self.planarconfig == 1:
data = data[..., :3]
else:
data = data[:, :3]
# TODO: convert to uint8?
elif photometric == PHOTOMETRIC.MINISBLACK:
raise NotImplementedError()
elif photometric == PHOTOMETRIC.MINISWHITE:
raise NotImplementedError()
elif photometric == PHOTOMETRIC.SEPARATED:
raise NotImplementedError()
else:
raise NotImplementedError()
return data
def aspage(self):
"""Return self."""
return self
@property
def keyframe(self):
"""Return keyframe, self."""
return self
@keyframe.setter
def keyframe(self, index):
"""Set keyframe, NOP."""
return
@lazyattr
def pages(self):
"""Return sequence of sub-pages (SubIFDs)."""
if 'SubIFDs' not in self.tags:
return tuple()
return TiffPages(self)
@lazyattr
def offsets_bytecounts(self):
"""Return simplified offsets and bytecounts."""
if self.is_contiguous:
offset, byte_count = self.is_contiguous
return [offset], [byte_count]
if self.is_tiled:
return self.dataoffsets, self.databytecounts
return clean_offsets_counts(self.dataoffsets, self.databytecounts)
@lazyattr
def is_contiguous(self):
"""Return offset and size of contiguous data, else None.
Excludes prediction and fill_order.
"""
if (self.compression != 1
or self.bitspersample not in (8, 16, 32, 64)):
return None
if 'TileWidth' in self.tags:
if (self.imagewidth != self.tilewidth or
self.imagelength % self.tilelength or
self.tilewidth % 16 or self.tilelength % 16):
return None
if ('ImageDepth' in self.tags and 'TileDepth' in self.tags and
(self.imagelength != self.tilelength or
self.imagedepth % self.tiledepth)):
return None
offsets = self.dataoffsets
bytecounts = self.databytecounts
if len(offsets) == 1:
return offsets[0], bytecounts[0]
if self.is_stk or all((offsets[i] + bytecounts[i] == offsets[i+1] or
bytecounts[i+1] == 0) # no data/ignore offset
for i in range(len(offsets)-1)):
return offsets[0], sum(bytecounts)
return None
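The adjacency test in `is_contiguous` above treats data as one contiguous block when every strip either ends exactly where the next begins or has a zero byte count (an offset to be ignored). A minimal standalone sketch of that check (`strips_contiguous` is a hypothetical helper, not a tifffile API):

```python
def strips_contiguous(offsets, bytecounts):
    """Return whether strips form one contiguous run in the file."""
    return all(offsets[i] + bytecounts[i] == offsets[i + 1]
               or bytecounts[i + 1] == 0  # no data; ignore its offset
               for i in range(len(offsets) - 1))
```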
@lazyattr
def is_final(self):
"""Return if page's image data are stored in final form.
Excludes byte-swapping.
"""
return (self.is_contiguous and self.fillorder == 1 and
self.predictor == 1 and not self.is_chroma_subsampled)
@lazyattr
def is_memmappable(self):
"""Return if page's image data in file can be memory-mapped."""
return (self.parent.filehandle.is_file and self.is_final and
# (self.bitspersample == 8 or self.parent.isnative) and
self.is_contiguous[0] % self.dtype.itemsize == 0) # aligned?
def __str__(self, detail=0, width=79):
"""Return string containing information about page."""
if self.keyframe != self:
return TiffFrame.__str__(self, detail)
attr = ''
for name in ('memmappable', 'final', 'contiguous'):
attr = getattr(self, 'is_'+name)
if attr:
attr = name.upper()
break
info = ' '.join(s.lower() for s in (
'x'.join(str(i) for i in self.shape),
'%s%s' % (TIFF.SAMPLEFORMAT(self.sampleformat).name,
self.bitspersample),
' '.join(i for i in (
TIFF.PHOTOMETRIC(self.photometric).name,
'REDUCED' if self.is_reduced else '',
'MASK' if self.is_mask else '',
'TILED' if self.is_tiled else '',
self.compression.name if self.compression != 1 else '',
self.planarconfig.name if self.planarconfig != 1 else '',
self.predictor.name if self.predictor != 1 else '',
self.fillorder.name if self.fillorder != 1 else '',
) + tuple(f.upper() for f in self.flags) + (attr,)
if i)
) if s)
info = 'TiffPage %i @%i %s' % (self.index, self.offset, info)
if detail <= 0:
return info
info = [info]
tags = self.tags
tlines = []
vlines = []
for tag in sorted(tags.values(), key=lambda x: x.code):
value = tag.__str__(width=width+1)
tlines.append(value[:width].strip())
if detail > 1 and len(value) > width:
name = tag.name.upper()
if detail <= 2 and ('COUNTS' in name or 'OFFSETS' in name):
value = pformat(tag.value, width=width, height=detail*4)
else:
value = pformat(tag.value, width=width, height=detail*12)
vlines.append('%s\n%s' % (tag.name, value))
info.append('\n'.join(tlines))
if detail > 1:
info.append('\n\n'.join(vlines))
for name in ('ndpi',):
name = name + '_tags'
attr = getattr(self, name, False)
if attr:
info.append('%s\n%s' % (name.upper(), pformat(attr)))
if detail > 3:
try:
info.append('DATA\n%s' % pformat(
self.asarray(), width=width, height=detail*8))
except Exception:
pass
return '\n\n'.join(info)
@lazyattr
def flags(self):
"""Return set of flags."""
return set((name.lower() for name in sorted(TIFF.FILE_FLAGS)
if getattr(self, 'is_' + name)))
@property
def ndim(self):
"""Return number of array dimensions."""
return len(self.shape)
@property
def size(self):
"""Return number of elements in array."""
return product(self.shape)
@lazyattr
def andor_tags(self):
"""Return consolidated metadata from Andor tags as dict.
Remove Andor tags from self.tags.
"""
if not self.is_andor:
return None
tags = self.tags
result = {'Id': tags['AndorId'].value}
for tag in list(self.tags.values()):
code = tag.code
if not 4864 < code < 5031:
continue
value = tag.value
name = tag.name[5:] if len(tag.name) > 5 else tag.name
result[name] = value
del tags[tag.name]
return result
@lazyattr
def epics_tags(self):
"""Return consolidated metadata from EPICS areaDetector tags as dict.
Remove areaDetector tags from self.tags.
"""
if not self.is_epics:
return None
result = {}
tags = self.tags
for tag in list(self.tags.values()):
code = tag.code
if not 65000 <= code < 65500:
continue
value = tag.value
if code == 65000:
result['timeStamp'] = datetime.datetime.fromtimestamp(
float(value))
elif code == 65001:
result['uniqueID'] = int(value)
elif code == 65002:
result['epicsTSSec'] = int(value)
elif code == 65003:
result['epicsTSNsec'] = int(value)
else:
key, value = value.split(':', 1)
result[key] = astype(value)
del tags[tag.name]
return result
@lazyattr
def ndpi_tags(self):
"""Return consolidated metadata from Hamamatsu NDPI as dict."""
if not self.is_ndpi:
return None
tags = self.tags
result = {}
for name in ('Make', 'Model', 'Software'):
result[name] = tags[name].value
for code, name in TIFF.NDPI_TAGS.items():
code = str(code)
if code in tags:
result[name] = tags[code].value
# del tags[code]
return result
@lazyattr
def geotiff_tags(self):
"""Return consolidated metadata from GeoTIFF tags as dict."""
if not self.is_geotiff:
return None
tags = self.tags
gkd = tags['GeoKeyDirectoryTag'].value
if gkd[0] != 1:
logging.warning('GeoTIFF tags: invalid GeoKeyDirectoryTag')
return {}
result = {
'KeyDirectoryVersion': gkd[0],
'KeyRevision': gkd[1],
'KeyRevisionMinor': gkd[2],
# 'NumberOfKeys': gkd[3],
}
# deltags = ['GeoKeyDirectoryTag']
geokeys = TIFF.GEO_KEYS
geocodes = TIFF.GEO_CODES
for index in range(gkd[3]):
keyid, tagid, count, offset = gkd[4 + index * 4: index * 4 + 8]
keyid = geokeys.get(keyid, keyid)
if tagid == 0:
value = offset
else:
tagname = TIFF.TAGS[tagid]
# deltags.append(tagname)
value = tags[tagname].value[offset: offset + count]
if tagid == 34737 and count > 1 and value[-1] == '|':
value = value[:-1]
value = value if count > 1 else value[0]
if keyid in geocodes:
try:
value = geocodes[keyid](value)
except Exception:
pass
result[keyid] = value
if 'IntergraphMatrixTag' in tags:
value = tags['IntergraphMatrixTag'].value
value = numpy.array(value)
if len(value) == 16:
value = value.reshape((4, 4)).tolist()
result['IntergraphMatrix'] = value
if 'ModelPixelScaleTag' in tags:
value = numpy.array(tags['ModelPixelScaleTag'].value).tolist()
result['ModelPixelScale'] = value
if 'ModelTiepointTag' in tags:
value = tags['ModelTiepointTag'].value
value = numpy.array(value).reshape((-1, 6)).squeeze().tolist()
result['ModelTiepoint'] = value
if 'ModelTransformationTag' in tags:
value = tags['ModelTransformationTag'].value
value = numpy.array(value).reshape((4, 4)).tolist()
result['ModelTransformation'] = value
# if 'ModelPixelScaleTag' in tags and 'ModelTiepointTag' in tags:
# sx, sy, sz = tags['ModelPixelScaleTag'].value
# tiepoints = tags['ModelTiepointTag'].value
# transforms = []
# for tp in range(0, len(tiepoints), 6):
# i, j, k, x, y, z = tiepoints[tp:tp+6]
# transforms.append([
# [sx, 0.0, 0.0, x - i * sx],
# [0.0, -sy, 0.0, y + j * sy],
# [0.0, 0.0, sz, z - k * sz],
# [0.0, 0.0, 0.0, 1.0]])
# if len(tiepoints) == 6:
# transforms = transforms[0]
# result['ModelTransformation'] = transforms
if 'RPCCoefficientTag' in tags:
rpcc = tags['RPCCoefficientTag'].value
result['RPCCoefficient'] = {
'ERR_BIAS': rpcc[0],
'ERR_RAND': rpcc[1],
'LINE_OFF': rpcc[2],
'SAMP_OFF': rpcc[3],
'LAT_OFF': rpcc[4],
'LONG_OFF': rpcc[5],
'HEIGHT_OFF': rpcc[6],
'LINE_SCALE': rpcc[7],
'SAMP_SCALE': rpcc[8],
'LAT_SCALE': rpcc[9],
'LONG_SCALE': rpcc[10],
'HEIGHT_SCALE': rpcc[11],
                'LINE_NUM_COEFF': rpcc[12:32],
                'LINE_DEN_COEFF': rpcc[32:52],
                'SAMP_NUM_COEFF': rpcc[52:72],
                'SAMP_DEN_COEFF': rpcc[72:92]}
return result
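The loop in `geotiff_tags` above walks the GeoKeyDirectoryTag array: a four-SHORT header (version, revision, minor revision, number of keys) followed by one four-SHORT entry per key (KeyID, TIFFTagLocation, Count, Value/Offset). A simplified standalone sketch of that layout (`parse_geokeys` is illustrative only; it records locations instead of dereferencing other tags):

```python
def parse_geokeys(gkd):
    """Parse a GeoKeyDirectoryTag value into (version, {keyid: value})."""
    version, revision, minor, nkeys = gkd[:4]
    keys = {}
    for i in range(nkeys):
        keyid, tagid, count, offset = gkd[4 + i * 4: 8 + i * 4]
        if tagid == 0:
            # value is stored inline in the Value/Offset field
            keys[keyid] = offset
        else:
            # value lives in another tag; record (tag, offset, count)
            keys[keyid] = (tagid, offset, count)
    return version, keys
```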
@property
def is_reduced(self):
"""Page is reduced image of another image."""
return self.subfiletype & 0b1
@property
def is_multipage(self):
"""Page is part of multi-page image."""
return self.subfiletype & 0b10
@property
def is_mask(self):
"""Page is transparency mask for another image."""
return self.subfiletype & 0b100
@property
def is_mrc(self):
"""Page is part of Mixed Raster Content."""
return self.subfiletype & 0b1000
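The four properties above each test one bit of the NewSubfileType value. A compact standalone sketch of the same bit layout (the constant and function names are hypothetical, mirroring the masks used above):

```python
# bit masks as tested by is_reduced, is_multipage, is_mask, is_mrc above
REDUCED, MULTIPAGE, MASK, MRC = 0b1, 0b10, 0b100, 0b1000

def subfiletype_flags(subfiletype):
    """Return set of flag names encoded in a subfiletype bit field."""
    return {name for name, bit in (('reduced', REDUCED),
                                   ('multipage', MULTIPAGE),
                                   ('mask', MASK),
                                   ('mrc', MRC))
            if subfiletype & bit}
```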
@property
def is_tiled(self):
"""Page contains tiled image."""
return 'TileWidth' in self.tags
@property
def is_chroma_subsampled(self):
"""Page contains chroma subsampled image."""
return ('YCbCrSubSampling' in self.tags and
self.tags['YCbCrSubSampling'].value != (1, 1))
@lazyattr
def is_imagej(self):
"""Return ImageJ description if exists, else None."""
for description in (self.description, self.description1):
if not description:
return None
if description[:7] == 'ImageJ=':
return description
return None
@lazyattr
def is_shaped(self):
"""Return description containing array shape if exists, else None."""
for description in (self.description, self.description1):
if not description:
return None
if description[:1] == '{' and '"shape":' in description:
return description
if description[:6] == 'shape=':
return description
return None
@property
def is_mdgel(self):
"""Page contains MDFileTag tag."""
return 'MDFileTag' in self.tags
@property
def is_mediacy(self):
"""Page contains Media Cybernetics Id tag."""
return ('MC_Id' in self.tags and
self.tags['MC_Id'].value[:7] == b'MC TIFF')
@property
def is_stk(self):
"""Page contains UIC2Tag tag."""
return 'UIC2tag' in self.tags
@property
def is_lsm(self):
"""Page contains CZ_LSMINFO tag."""
return 'CZ_LSMINFO' in self.tags
@property
def is_fluoview(self):
"""Page contains FluoView MM_STAMP tag."""
return 'MM_Stamp' in self.tags
@property
def is_nih(self):
"""Page contains NIH image header."""
return 'NIHImageHeader' in self.tags
@property
def is_sgi(self):
"""Page contains SGI image and tile depth tags."""
return 'ImageDepth' in self.tags and 'TileDepth' in self.tags
@property
def is_vista(self):
"""Software tag is 'ISS Vista'."""
return self.software == 'ISS Vista'
@property
def is_metaseries(self):
"""Page contains MDS MetaSeries metadata in ImageDescription tag."""
if self.index > 1 or self.software != 'MetaSeries':
return False
d = self.description
        return d.startswith('<MetaData>') and d.endswith('</MetaData>')
@property
def is_ome(self):
"""Page contains OME-XML in ImageDescription tag."""
if self.index > 1 or not self.description:
return False
d = self.description
        return d[:14] == '<?xml version=' and d[-6:] == '</OME>'
@property
def is_scn(self):
"""Page contains Leica SCN XML in ImageDescription tag."""
if self.index > 1 or not self.description:
return False
d = self.description
        return d[:14] == '<?xml version=' and d[-6:] == '</scn>'
@property
def is_micromanager(self):
"""Page contains Micro-Manager metadata."""
return 'MicroManagerMetadata' in self.tags
@property
def is_andor(self):
"""Page contains Andor Technology tags."""
return 'AndorId' in self.tags
@property
def is_pilatus(self):
"""Page contains Pilatus tags."""
return (self.software[:8] == 'TVX TIFF' and
self.description[:2] == '# ')
@property
def is_epics(self):
"""Page contains EPICS areaDetector tags."""
return (self.description == 'EPICS areaDetector' or
self.software == 'EPICS areaDetector')
@property
def is_tvips(self):
"""Page contains TVIPS metadata."""
return 'TVIPS' in self.tags
@property
def is_fei(self):
"""Page contains SFEG or HELIOS metadata."""
return 'FEI_SFEG' in self.tags or 'FEI_HELIOS' in self.tags
@property
def is_sem(self):
"""Page contains Zeiss SEM metadata."""
return 'CZ_SEM' in self.tags
@property
def is_svs(self):
"""Page contains Aperio metadata."""
return self.description[:20] == 'Aperio Image Library'
@property
def is_scanimage(self):
"""Page contains ScanImage metadata."""
return (self.description[:12] == 'state.config' or
self.software[:22] == 'SI.LINE_FORMAT_VERSION' or
'scanimage.SI.' in self.description[-256:])
@property
def is_qpi(self):
"""Page contains PerkinElmer tissue images metadata."""
# The ImageDescription tag contains XML with a top-level
        # <PerkinElmer-QPI-ImageDescription> element
return self.software[:15] == 'PerkinElmer-QPI'
@property
def is_geotiff(self):
"""Page contains GeoTIFF metadata."""
return 'GeoKeyDirectoryTag' in self.tags
@property
def is_sis(self):
"""Page contains Olympus SIS metadata."""
return 'OlympusSIS' in self.tags or 'OlympusINI' in self.tags
@lazyattr # must not be property; tag 65420 is later removed
def is_ndpi(self):
"""Page contains NDPI metadata."""
return '65420' in self.tags and 'Make' in self.tags
class TiffFrame(object):
"""Lightweight TIFF image file directory (IFD).
Only a limited number of tag values are read from file, e.g. StripOffsets,
and StripByteCounts. Other tag values are assumed to be identical with a
specified TiffPage instance, the keyframe.
TiffFrame is intended to reduce resource usage and speed up reading image
data from file, not for introspection of metadata.
Not compatible with Python 2.
"""
__slots__ = ('index', 'keyframe', 'parent', 'offset', 'dataoffsets',
'databytecounts')
is_mdgel = False
pages = None
tags = {}
def __init__(self, parent, index, keyframe):
"""Read specified tags from file.
The file handle position must be at the offset to a valid IFD.
"""
fh = parent.filehandle
self.parent = parent
self.index = index
self.keyframe = keyframe
self.dataoffsets = None
self.databytecounts = None
self.offset = fh.tell()
unpack = struct.unpack
tiff = parent.tiff
try:
tagno = unpack(tiff.tagnoformat, fh.read(tiff.tagnosize))[0]
if tagno > 4096:
raise ValueError('suspicious number of tags')
except Exception:
raise ValueError('corrupted page list at offset %i' % self.offset)
# tags = {}
tagcodes = {273, 279, 324, 325} # TIFF.FRAME_TAGS
tagoffset = self.offset + tiff.tagnosize # fh.tell()
tagsize = tiff.tagsize
tagindex = -tagsize
codeformat = tiff.tagformat1[:2]
tagbytes = fh.read(tagsize * tagno)
for _ in range(tagno):
tagindex += tagsize
code = unpack(codeformat, tagbytes[tagindex:tagindex+2])[0]
if code not in tagcodes:
continue
try:
tag = TiffTag(parent, tagbytes[tagindex:tagindex+tagsize],
tagoffset+tagindex)
except TiffTag.Error as e:
logging.warning('TiffTag %i: %s', code, str(e))
continue
if code == 273 or code == 324:
setattr(self, 'dataoffsets', tag.value)
elif code == 279 or code == 325:
setattr(self, 'databytecounts', tag.value)
# elif code == 270:
# tagname = tag.name
# if tagname not in tags:
# tags[tagname] = bytes2str(tag.value)
# elif 'ImageDescription1' not in tags:
# tags['ImageDescription1'] = bytes2str(tag.value)
# else:
# tags[tag.name] = tag.value
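The scanning loop above slices fixed-size tag records out of `tagbytes` and unpacks only the 2-byte code before deciding whether to build a full TiffTag. For a classic little-endian TIFF, each record is 12 bytes: code (H), data type (H), count (I), and a 4-byte value/offset field. A standalone sketch of that header layout (`parse_classic_tag` is illustrative, not a tifffile function):

```python
import struct

def parse_classic_tag(tagbytes):
    """Unpack one 12-byte classic little-endian TIFF tag record."""
    code, dtype = struct.unpack('<HH', tagbytes[:4])     # tagformat1
    count, value = struct.unpack('<I4s', tagbytes[4:12])  # tagformat2
    return code, dtype, count, value
```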
def aspage(self):
"""Return TiffPage from file."""
self.parent.filehandle.seek(self.offset)
return TiffPage(self.parent, index=self.index, keyframe=None)
def asarray(self, *args, **kwargs):
"""Read image data from file and return as numpy array."""
# TODO: fix TypeError on Python 2
# "TypeError: unbound method asarray() must be called with TiffPage
# instance as first argument (got TiffFrame instance instead)"
kwargs['validate'] = False
return TiffPage.asarray(self, *args, **kwargs)
def asrgb(self, *args, **kwargs):
"""Read image data from file and return RGB image as numpy array."""
kwargs['validate'] = False
return TiffPage.asrgb(self, *args, **kwargs)
@property
def offsets_bytecounts(self):
"""Return simplified offsets and bytecounts."""
if self.keyframe.is_contiguous:
return self.dataoffsets[:1], self.keyframe.is_contiguous[1:]
if self.keyframe.is_tiled:
return self.dataoffsets, self.databytecounts
return clean_offsets_counts(self.dataoffsets, self.databytecounts)
@property
def is_contiguous(self):
"""Return offset and size of contiguous data, else None."""
if self.keyframe.is_contiguous:
return self.dataoffsets[0], self.keyframe.is_contiguous[1]
return None
@property
def is_memmappable(self):
"""Return if page's image data in file can be memory-mapped."""
return self.keyframe.is_memmappable
def __getattr__(self, name):
"""Return attribute from keyframe."""
if name in TIFF.FRAME_ATTRS:
return getattr(self.keyframe, name)
# this error could be raised because an AttributeError was
# raised inside a @property function
raise AttributeError("'%s' object has no attribute '%s'" %
(self.__class__.__name__, name))
def __str__(self, detail=0):
"""Return string containing information about frame."""
info = ' '.join(s for s in (
'x'.join(str(i) for i in self.shape),
str(self.dtype)))
return 'TiffFrame %i @%i %s' % (self.index, self.offset, info)
class TiffTag(object):
"""TIFF tag structure.
Attributes
----------
name : string
Name of tag.
code : int
Decimal code of tag.
dtype : str
Datatype of tag data. One of TIFF DATA_FORMATS.
count : int
Number of values.
value : various types
Tag data as Python object.
    valueoffset : int
        Location of value in file.
All attributes are read-only.
"""
__slots__ = ('code', 'count', 'dtype', 'value', 'valueoffset')
class Error(Exception):
"""Custom TiffTag error."""
def __init__(self, parent, tagheader, tagoffset):
"""Initialize instance from tag header."""
fh = parent.filehandle
tiff = parent.tiff
byteorder = tiff.byteorder
offsetsize = tiff.offsetsize
unpack = struct.unpack
self.valueoffset = tagoffset + offsetsize + 4
code, type_ = unpack(tiff.tagformat1, tagheader[:4])
count, value = unpack(tiff.tagformat2, tagheader[4:])
try:
dtype = TIFF.DATA_FORMATS[type_]
except KeyError:
raise TiffTag.Error('unknown tag data type %i' % type_)
fmt = '%s%i%s' % (byteorder, count * int(dtype[0]), dtype[1])
size = struct.calcsize(fmt)
if size > offsetsize or code in TIFF.TAG_READERS:
self.valueoffset = offset = unpack(tiff.offsetformat, value)[0]
if offset < 8 or offset > fh.size - size:
raise TiffTag.Error('invalid tag value offset')
# if offset % 2:
# logging.warning(
# 'TiffTag: value does not begin on word boundary')
fh.seek(offset)
if code in TIFF.TAG_READERS:
readfunc = TIFF.TAG_READERS[code]
value = readfunc(fh, byteorder, dtype, count, offsetsize)
elif type_ == 7 or (count > 1 and dtype[-1] == 'B'):
value = read_bytes(fh, byteorder, dtype, count, offsetsize)
elif code in TIFF.TAGS or dtype[-1] == 's':
value = unpack(fmt, fh.read(size))
else:
value = read_numpy(fh, byteorder, dtype, count, offsetsize)
elif dtype[-1] == 'B' or type_ == 7:
value = value[:size]
else:
value = unpack(fmt, value[:size])
process = (code not in TIFF.TAG_READERS and code not in TIFF.TAG_TUPLE
and type_ != 7)
if process and dtype[-1] == 's' and isinstance(value[0], bytes):
# TIFF ASCII fields can contain multiple strings,
# each terminated with a NUL
value = value[0]
try:
value = bytes2str(stripascii(value).strip())
except UnicodeDecodeError:
# TODO: this doesn't work on Python 2
logging.warning(
'TiffTag %i: coercing invalid ASCII to bytes', code)
dtype = '1B'
else:
if code in TIFF.TAG_ENUM:
t = TIFF.TAG_ENUM[code]
try:
value = tuple(t(v) for v in value)
except ValueError as e:
logging.warning('TiffTag %i: %s', code, str(e))
if process:
if len(value) == 1:
value = value[0]
self.code = code
self.dtype = dtype
self.count = count
self.value = value
@property
def name(self):
"""Return name of tag from TIFF.TAGS registry."""
try:
return TIFF.TAGS[self.code]
except KeyError:
return str(self.code)
def _fix_lsm_bitspersample(self, parent):
"""Correct LSM bitspersample tag.
Old LSM writers may use a separate region for two 16-bit values,
although they fit into the tag value element of the tag.
"""
if self.code == 258 and self.count == 2:
# TODO: test this case; need example file
logging.warning(
'TiffTag %i: correcting LSM bitspersample tag', self.code)
value = struct.pack(' 0:
data = fh.read(min(chunksize, size))
datasize = len(data)
if datasize == 0:
break
size -= datasize
data = numpy.frombuffer(data, dtype)
out[index:index+data.size] = data
index += data.size
if hasattr(out, 'flush'):
out.flush()
return out.reshape(shape)
def read_record(self, dtype, shape=1, byteorder=None):
"""Return numpy record from file."""
rec = numpy.rec
try:
record = rec.fromfile(self._fh, dtype, shape, byteorder=byteorder)
except Exception:
dtype = numpy.dtype(dtype)
if shape is None:
shape = self._size // dtype.itemsize
size = product(sequence(shape)) * dtype.itemsize
data = self._fh.read(size)
record = rec.fromstring(data, dtype, shape, byteorder=byteorder)
return record[0] if shape == 1 else record
def write_empty(self, size):
"""Append size bytes to file. Position must be at end of file."""
if size < 1:
return
self._fh.seek(size-1, 1)
self._fh.write(b'\x00')
def write_array(self, data):
"""Write numpy array to binary file."""
try:
data.tofile(self._fh)
except Exception:
# BytesIO
self._fh.write(data.tostring())
def tell(self):
"""Return file's current position."""
return self._fh.tell() - self._offset
def seek(self, offset, whence=0):
"""Set file's current position."""
if self._offset:
if whence == 0:
self._fh.seek(self._offset + offset, whence)
return
elif whence == 2 and self._size > 0:
self._fh.seek(self._offset + self._size + offset, 0)
return
self._fh.seek(offset, whence)
def close(self):
"""Close file."""
if self._close and self._fh:
self._fh.close()
self._fh = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def __getattr__(self, name):
"""Return attribute from underlying file object."""
if self._offset:
warnings.warn(
"FileHandle: '%s' not implemented for embedded files" % name)
return getattr(self._fh, name)
@property
def name(self):
return self._name
@property
def dirname(self):
return self._dir
@property
def path(self):
return os.path.join(self._dir, self._name)
@property
def size(self):
return self._size
@property
def closed(self):
return self._fh is None
@property
def lock(self):
return self._lock
@lock.setter
def lock(self, value):
self._lock = threading.RLock() if value else NullContext()
class NullContext(object):
"""Null context manager.
>>> with NullContext():
... pass
"""
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
class OpenFileCache(object):
"""Keep files open."""
__slots__ = ('files', 'past', 'lock', 'size')
def __init__(self, size, lock=None):
"""Initialize open file cache."""
self.past = [] # FIFO of opened files
self.files = {} # refcounts of opened files
self.lock = NullContext() if lock is None else lock
self.size = int(size)
def open(self, filehandle):
"""Re-open file if necessary."""
with self.lock:
if filehandle in self.files:
self.files[filehandle] += 1
elif filehandle.closed:
filehandle.open()
self.files[filehandle] = 1
self.past.append(filehandle)
def close(self, filehandle):
"""Close openend file if no longer used."""
with self.lock:
if filehandle in self.files:
self.files[filehandle] -= 1
# trim the file cache
index = 0
size = len(self.past)
while size > self.size and index < size:
filehandle = self.past[index]
if self.files[filehandle] == 0:
filehandle.close()
del self.files[filehandle]
del self.past[index]
size -= 1
else:
index += 1
def clear(self):
"""Close all opened files if not in use."""
with self.lock:
for filehandle, refcount in list(self.files.items()):
if refcount == 0:
filehandle.close()
del self.files[filehandle]
del self.past[self.past.index(filehandle)]
class LazyConst(object):
"""Class whose attributes are computed on first access from its methods."""
def __init__(self, cls):
self._cls = cls
self.__doc__ = getattr(cls, '__doc__')
def __getattr__(self, name):
func = getattr(self._cls, name)
if not callable(func):
return func
try:
value = func()
except TypeError:
# Python 2 unbound method
value = func.__func__()
setattr(self, name, value)
return value
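`LazyConst` above turns a class into a namespace whose zero-argument methods run once, on first attribute access, and are then cached as plain attributes (subsequent lookups bypass `__getattr__`). A Python 3-only standalone sketch of the same pattern (`LazyConstSketch` is an illustrative reimplementation, not the class above):

```python
class LazyConstSketch(object):
    """Compute attributes on first access from zero-argument methods."""
    def __init__(self, cls):
        self._cls = cls

    def __getattr__(self, name):
        value = getattr(self._cls, name)()
        setattr(self, name, value)  # cache; __getattr__ not called again
        return value

calls = []

@LazyConstSketch
class CONST(object):
    def answer():
        calls.append(1)  # track how many times the method runs
        return 42
```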
@LazyConst
class TIFF(object):
"""Namespace for module constants."""
def CLASSIC_LE():
class ClassicTiffLe(object):
version = 42
byteorder = '<'
offsetsize = 4
offsetformat = '= 32768
32781: 'ImageID',
32931: 'WangTag1',
32932: 'WangAnnotation',
32933: 'WangTag3',
32934: 'WangTag4',
32953: 'ImageReferencePoints',
32954: 'RegionXformTackPoint',
32955: 'WarpQuadrilateral',
32956: 'AffineTransformMat',
32995: 'Matteing',
32996: 'DataType', # use SampleFormat
32997: 'ImageDepth',
32998: 'TileDepth',
33300: 'ImageFullWidth',
33301: 'ImageFullLength',
33302: 'TextureFormat',
33303: 'TextureWrapModes',
33304: 'FieldOfViewCotangent',
33305: 'MatrixWorldToScreen',
33306: 'MatrixWorldToCamera',
33405: 'Model2',
33421: 'CFARepeatPatternDim',
33422: 'CFAPattern',
33423: 'BatteryLevel',
33424: 'KodakIFD',
33434: 'ExposureTime',
33437: 'FNumber',
33432: 'Copyright',
33445: 'MDFileTag',
33446: 'MDScalePixel',
33447: 'MDColorTable',
33448: 'MDLabName',
33449: 'MDSampleInfo',
33450: 'MDPrepDate',
33451: 'MDPrepTime',
33452: 'MDFileUnits',
33471: 'OlympusINI',
33550: 'ModelPixelScaleTag',
33560: 'OlympusSIS', # see also 33471 and 34853
33589: 'AdventScale',
33590: 'AdventRevision',
33628: 'UIC1tag', # Metamorph Universal Imaging Corp STK
33629: 'UIC2tag',
33630: 'UIC3tag',
33631: 'UIC4tag',
33723: 'IPTCNAA',
33858: 'ExtendedTagsOffset', # DEFF points IFD with private tags
33918: 'IntergraphPacketData', # INGRPacketDataTag
33919: 'IntergraphFlagRegisters', # INGRFlagRegisters
33920: 'IntergraphMatrixTag', # IrasBTransformationMatrix
33921: 'INGRReserved',
33922: 'ModelTiepointTag',
33923: 'LeicaMagic',
34016: 'Site', # 34016..34032 ANSI IT8 TIFF/IT
34017: 'ColorSequence',
34018: 'IT8Header',
34019: 'RasterPadding',
34020: 'BitsPerRunLength',
34021: 'BitsPerExtendedRunLength',
34022: 'ColorTable',
34023: 'ImageColorIndicator',
34024: 'BackgroundColorIndicator',
34025: 'ImageColorValue',
34026: 'BackgroundColorValue',
34027: 'PixelIntensityRange',
34028: 'TransparencyIndicator',
34029: 'ColorCharacterization',
34030: 'HCUsage',
34031: 'TrapIndicator',
34032: 'CMYKEquivalent',
34118: 'CZ_SEM', # Zeiss SEM
34152: 'AFCP_IPTC',
34232: 'PixelMagicJBIGOptions', # EXIF, also TI FrameCount
34263: 'JPLCartoIFD',
34122: 'IPLAB', # number of images
34264: 'ModelTransformationTag',
34306: 'WB_GRGBLevels', # Leaf MOS
34310: 'LeafData',
34361: 'MM_Header',
34362: 'MM_Stamp',
34363: 'MM_Unknown',
34377: 'ImageResources', # Photoshop
34386: 'MM_UserBlock',
34412: 'CZ_LSMINFO',
34665: 'ExifTag',
34675: 'InterColorProfile', # ICCProfile
34680: 'FEI_SFEG',
34682: 'FEI_HELIOS',
34683: 'FEI_TITAN',
34687: 'FXExtensions',
34688: 'MultiProfiles',
34689: 'SharedData',
34690: 'T88Options',
34710: 'MarCCD', # offset to MarCCD header
34732: 'ImageLayer',
34735: 'GeoKeyDirectoryTag',
34736: 'GeoDoubleParamsTag',
34737: 'GeoAsciiParamsTag',
34750: 'JBIGOptions',
34821: 'PIXTIFF', # ? Pixel Translations Inc
34850: 'ExposureProgram',
34852: 'SpectralSensitivity',
34853: 'GPSTag', # GPSIFD also OlympusSIS2
34855: 'ISOSpeedRatings',
34856: 'OECF',
34857: 'Interlace',
34858: 'TimeZoneOffset',
34859: 'SelfTimerMode',
34864: 'SensitivityType',
34865: 'StandardOutputSensitivity',
34866: 'RecommendedExposureIndex',
34867: 'ISOSpeed',
34868: 'ISOSpeedLatitudeyyy',
34869: 'ISOSpeedLatitudezzz',
34908: 'HylaFAXFaxRecvParams',
34909: 'HylaFAXFaxSubAddress',
34910: 'HylaFAXFaxRecvTime',
34911: 'FaxDcs',
34929: 'FedexEDR',
34954: 'LeafSubIFD',
34959: 'Aphelion1',
34960: 'Aphelion2',
34961: 'AphelionInternal', # ADCIS
36864: 'ExifVersion',
36867: 'DateTimeOriginal',
36868: 'DateTimeDigitized',
36873: 'GooglePlusUploadCode',
36880: 'OffsetTime',
36881: 'OffsetTimeOriginal',
36882: 'OffsetTimeDigitized',
# TODO: Pilatus/CHESS/TV6 36864..37120 conflicting with Exif tags
# 36864: 'TVX ?',
# 36865: 'TVX_NumExposure',
# 36866: 'TVX_NumBackground',
# 36867: 'TVX_ExposureTime',
# 36868: 'TVX_BackgroundTime',
# 36870: 'TVX ?',
# 36873: 'TVX_SubBpp',
# 36874: 'TVX_SubWide',
# 36875: 'TVX_SubHigh',
# 36876: 'TVX_BlackLevel',
# 36877: 'TVX_DarkCurrent',
# 36878: 'TVX_ReadNoise',
# 36879: 'TVX_DarkCurrentNoise',
# 36880: 'TVX_BeamMonitor',
# 37120: 'TVX_UserVariables', # A/D values
37121: 'ComponentsConfiguration',
37122: 'CompressedBitsPerPixel',
37377: 'ShutterSpeedValue',
37378: 'ApertureValue',
37379: 'BrightnessValue',
37380: 'ExposureBiasValue',
37381: 'MaxApertureValue',
37382: 'SubjectDistance',
37383: 'MeteringMode',
37384: 'LightSource',
37385: 'Flash',
37386: 'FocalLength',
37387: 'FlashEnergy_', # 41483
37388: 'SpatialFrequencyResponse_', # 41484
37389: 'Noise',
37390: 'FocalPlaneXResolution',
37391: 'FocalPlaneYResolution',
37392: 'FocalPlaneResolutionUnit',
37393: 'ImageNumber',
37394: 'SecurityClassification',
37395: 'ImageHistory',
37396: 'SubjectLocation',
37397: 'ExposureIndex',
37398: 'TIFFEPStandardID',
37399: 'SensingMethod',
37434: 'CIP3DataFile',
37435: 'CIP3Sheet',
37436: 'CIP3Side',
37439: 'StoNits',
37500: 'MakerNote',
37510: 'UserComment',
37520: 'SubsecTime',
37521: 'SubsecTimeOriginal',
37522: 'SubsecTimeDigitized',
37679: 'MODIText', # Microsoft Office Document Imaging
37680: 'MODIOLEPropertySetStorage',
37681: 'MODIPositioning',
37706: 'TVIPS', # offset to TemData structure
37707: 'TVIPS1',
37708: 'TVIPS2', # same TemData structure as undefined
37724: 'ImageSourceData', # Photoshop
37888: 'Temperature',
37889: 'Humidity',
37890: 'Pressure',
37891: 'WaterDepth',
37892: 'Acceleration',
37893: 'CameraElevationAngle',
40001: 'MC_IpWinScal', # Media Cybernetics
# 40001: 'RecipName', # MS FAX
40002: 'RecipNumber',
40003: 'SenderName',
40004: 'Routing',
40005: 'CallerId',
40006: 'TSID',
40007: 'CSID',
40008: 'FaxTime',
40100: 'MC_IdOld',
40106: 'MC_Unknown',
40965: 'InteroperabilityTag', # InteropOffset
40091: 'XPTitle',
40092: 'XPComment',
40093: 'XPAuthor',
40094: 'XPKeywords',
40095: 'XPSubject',
40960: 'FlashpixVersion',
40961: 'ColorSpace',
40962: 'PixelXDimension',
40963: 'PixelYDimension',
40964: 'RelatedSoundFile',
40976: 'SamsungRawPointersOffset',
40977: 'SamsungRawPointersLength',
41217: 'SamsungRawByteOrder',
41218: 'SamsungRawUnknown',
41483: 'FlashEnergy',
41484: 'SpatialFrequencyResponse',
41485: 'Noise_', # 37389
41486: 'FocalPlaneXResolution_', # 37390
41487: 'FocalPlaneYResolution_', # 37391
41488: 'FocalPlaneResolutionUnit_', # 37392
41489: 'ImageNumber_', # 37393
41490: 'SecurityClassification_', # 37394
41491: 'ImageHistory_', # 37395
41492: 'SubjectLocation_', # 37396
41493: 'ExposureIndex_', # 37397
41494: 'TIFF-EPStandardID',
41495: 'SensingMethod_', # 37399
41728: 'FileSource',
41729: 'SceneType',
41730: 'CFAPattern_', # 33422
41985: 'CustomRendered',
41986: 'ExposureMode',
41987: 'WhiteBalance',
41988: 'DigitalZoomRatio',
41989: 'FocalLengthIn35mmFilm',
41990: 'SceneCaptureType',
41991: 'GainControl',
41992: 'Contrast',
41993: 'Saturation',
41994: 'Sharpness',
41995: 'DeviceSettingDescription',
41996: 'SubjectDistanceRange',
42016: 'ImageUniqueID',
42032: 'CameraOwnerName',
42033: 'BodySerialNumber',
42034: 'LensSpecification',
42035: 'LensMake',
42036: 'LensModel',
42037: 'LensSerialNumber',
42112: 'GDAL_METADATA',
42113: 'GDAL_NODATA',
42240: 'Gamma',
43314: 'NIHImageHeader',
44992: 'ExpandSoftware',
44993: 'ExpandLens',
44994: 'ExpandFilm',
44995: 'ExpandFilterLens',
44996: 'ExpandScanner',
44997: 'ExpandFlashLamp',
48129: 'PixelFormat', # HDP and WDP
48130: 'Transformation',
48131: 'Uncompressed',
48132: 'ImageType',
48256: 'ImageWidth_', # 256
48257: 'ImageHeight_',
48258: 'WidthResolution',
48259: 'HeightResolution',
48320: 'ImageOffset',
48321: 'ImageByteCount',
48322: 'AlphaOffset',
48323: 'AlphaByteCount',
48324: 'ImageDataDiscard',
48325: 'AlphaDataDiscard',
50003: 'KodakAPP3',
50215: 'OceScanjobDescription',
50216: 'OceApplicationSelector',
50217: 'OceIdentificationNumber',
50218: 'OceImageLogicCharacteristics',
50255: 'Annotations',
50288: 'MC_Id', # Media Cybernetics
50289: 'MC_XYPosition',
50290: 'MC_ZPosition',
50291: 'MC_XYCalibration',
50292: 'MC_LensCharacteristics',
50293: 'MC_ChannelName',
50294: 'MC_ExcitationWavelength',
50295: 'MC_TimeStamp',
50296: 'MC_FrameProperties',
50341: 'PrintImageMatching',
50495: 'PCO_RAW', # TODO: PCO CamWare
50547: 'OriginalFileName',
50560: 'USPTO_OriginalContentType', # US Patent Office
50561: 'USPTO_RotationCode',
50648: 'CR2Unknown1',
50649: 'CR2Unknown2',
50656: 'CR2CFAPattern',
50674: 'LercParameters', # ESRI 50674 .. 50677
50706: 'DNGVersion', # DNG 50706 .. 51112
50707: 'DNGBackwardVersion',
50708: 'UniqueCameraModel',
50709: 'LocalizedCameraModel',
50710: 'CFAPlaneColor',
50711: 'CFALayout',
50712: 'LinearizationTable',
50713: 'BlackLevelRepeatDim',
50714: 'BlackLevel',
50715: 'BlackLevelDeltaH',
50716: 'BlackLevelDeltaV',
50717: 'WhiteLevel',
50718: 'DefaultScale',
50719: 'DefaultCropOrigin',
50720: 'DefaultCropSize',
50721: 'ColorMatrix1',
50722: 'ColorMatrix2',
50723: 'CameraCalibration1',
50724: 'CameraCalibration2',
50725: 'ReductionMatrix1',
50726: 'ReductionMatrix2',
50727: 'AnalogBalance',
50728: 'AsShotNeutral',
50729: 'AsShotWhiteXY',
50730: 'BaselineExposure',
50731: 'BaselineNoise',
50732: 'BaselineSharpness',
50733: 'BayerGreenSplit',
50734: 'LinearResponseLimit',
50735: 'CameraSerialNumber',
50736: 'LensInfo',
50737: 'ChromaBlurRadius',
50738: 'AntiAliasStrength',
50739: 'ShadowScale',
50740: 'DNGPrivateData',
50741: 'MakerNoteSafety',
50752: 'RawImageSegmentation',
50778: 'CalibrationIlluminant1',
50779: 'CalibrationIlluminant2',
50780: 'BestQualityScale',
50781: 'RawDataUniqueID',
50784: 'AliasLayerMetadata',
50827: 'OriginalRawFileName',
50828: 'OriginalRawFileData',
50829: 'ActiveArea',
50830: 'MaskedAreas',
50831: 'AsShotICCProfile',
50832: 'AsShotPreProfileMatrix',
50833: 'CurrentICCProfile',
50834: 'CurrentPreProfileMatrix',
50838: 'IJMetadataByteCounts',
50839: 'IJMetadata',
50844: 'RPCCoefficientTag',
50879: 'ColorimetricReference',
50885: 'SRawType',
50898: 'PanasonicTitle',
50899: 'PanasonicTitle2',
50908: 'RSID', # DGIWG
50909: 'GEO_METADATA', # DGIWG XML
50931: 'CameraCalibrationSignature',
50932: 'ProfileCalibrationSignature',
50933: 'ProfileIFD',
50934: 'AsShotProfileName',
50935: 'NoiseReductionApplied',
50936: 'ProfileName',
50937: 'ProfileHueSatMapDims',
50938: 'ProfileHueSatMapData1',
50939: 'ProfileHueSatMapData2',
50940: 'ProfileToneCurve',
50941: 'ProfileEmbedPolicy',
50942: 'ProfileCopyright',
50964: 'ForwardMatrix1',
50965: 'ForwardMatrix2',
50966: 'PreviewApplicationName',
50967: 'PreviewApplicationVersion',
50968: 'PreviewSettingsName',
50969: 'PreviewSettingsDigest',
50970: 'PreviewColorSpace',
50971: 'PreviewDateTime',
50972: 'RawImageDigest',
50973: 'OriginalRawFileDigest',
50974: 'SubTileBlockSize',
50975: 'RowInterleaveFactor',
50981: 'ProfileLookTableDims',
50982: 'ProfileLookTableData',
51008: 'OpcodeList1',
51009: 'OpcodeList2',
51022: 'OpcodeList3',
51023: 'FibicsXML',
51041: 'NoiseProfile',
51043: 'TimeCodes',
51044: 'FrameRate',
51058: 'TStop',
51081: 'ReelName',
51089: 'OriginalDefaultFinalSize',
51090: 'OriginalBestQualitySize',
51091: 'OriginalDefaultCropSize',
51105: 'CameraLabel',
51107: 'ProfileHueSatMapEncoding',
51108: 'ProfileLookTableEncoding',
51109: 'BaselineExposureOffset',
51110: 'DefaultBlackRender',
51111: 'NewRawImageDigest',
51112: 'RawToPreviewGain',
51125: 'DefaultUserCrop',
51123: 'MicroManagerMetadata',
51159: 'ZIFmetadata', # Objective Pathology Services
51160: 'ZIFannotations', # Objective Pathology Services
59932: 'Padding',
59933: 'OffsetSchema',
# Reusable Tags 65000-65535
# 65000: Dimap_Document XML
# 65000-65112: Photoshop Camera RAW EXIF tags
# 65000: 'OwnerName',
# 65001: 'SerialNumber',
# 65002: 'Lens',
# 65024: 'KDC_IFD',
# 65100: 'RawFile',
# 65101: 'Converter',
# 65102: 'WhiteBalance',
# 65105: 'Exposure',
# 65106: 'Shadows',
# 65107: 'Brightness',
# 65108: 'Contrast',
# 65109: 'Saturation',
# 65110: 'Sharpness',
# 65111: 'Smoothness',
# 65112: 'MoireFilter',
65200: 'FlexXML',
}
def TAG_NAMES():
return {v: c for c, v in TIFF.TAGS.items()}
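# Example (hedged, doctest-style): TAGS maps codes to names and TAG_NAMES
# inverts it. Assumes these functions are exposed as lazy attributes on the
# TIFF namespace object used elsewhere in this module:
#   >>> TIFF.TAGS[34665]
#   'ExifTag'
#   >>> TIFF.TAG_NAMES['ExifTag']
#   34665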
def TAG_READERS():
# Map TIFF tag codes to tag value reader functions
return {
320: read_colormap,
# 700: read_bytes, # read_utf8,
# 34377: read_bytes,
33723: read_bytes,
# 34675: read_bytes,
33628: read_uic1tag, # Universal Imaging Corp STK
33629: read_uic2tag,
33630: read_uic3tag,
33631: read_uic4tag,
34118: read_cz_sem, # Carl Zeiss SEM
34361: read_mm_header, # Olympus FluoView
34362: read_mm_stamp,
34363: read_numpy, # MM_Unknown
34386: read_numpy, # MM_UserBlock
34412: read_cz_lsminfo, # Carl Zeiss LSM
34680: read_fei_metadata, # S-FEG
34682: read_fei_metadata, # Helios NanoLab
37706: read_tvips_header, # TVIPS EMMENU
37724: read_bytes, # ImageSourceData
33923: read_bytes, # read_leica_magic
43314: read_nih_image_header,
# 40001: read_bytes,
40100: read_bytes,
50288: read_bytes,
50296: read_bytes,
50839: read_bytes,
51123: read_json,
33471: read_sis_ini,
33560: read_sis,
34665: read_exif_ifd,
34853: read_gps_ifd, # conflicts with OlympusSIS
40965: read_interoperability_ifd,
}
def TAG_TUPLE():
# Tags whose values must be stored as tuples
return frozenset((273, 279, 324, 325, 330, 530, 531, 34736))
def TAG_ATTRIBUTES():
# Map tag codes to TiffPage attribute names
return {
'ImageWidth': 'imagewidth',
'ImageLength': 'imagelength',
'BitsPerSample': 'bitspersample',
'Compression': 'compression',
'PlanarConfiguration': 'planarconfig',
'FillOrder': 'fillorder',
'PhotometricInterpretation': 'photometric',
'ColorMap': 'colormap',
'ImageDescription': 'description',
'ImageDescription1': 'description1',
'SamplesPerPixel': 'samplesperpixel',
'RowsPerStrip': 'rowsperstrip',
'Software': 'software',
'Predictor': 'predictor',
'TileWidth': 'tilewidth',
'TileLength': 'tilelength',
'ExtraSamples': 'extrasamples',
'SampleFormat': 'sampleformat',
'ImageDepth': 'imagedepth',
'TileDepth': 'tiledepth',
'NewSubfileType': 'subfiletype',
}
def TAG_ENUM():
return {
# 254: TIFF.FILETYPE,
255: TIFF.OFILETYPE,
259: TIFF.COMPRESSION,
262: TIFF.PHOTOMETRIC,
263: TIFF.THRESHHOLD,
266: TIFF.FILLORDER,
274: TIFF.ORIENTATION,
284: TIFF.PLANARCONFIG,
290: TIFF.GRAYRESPONSEUNIT,
# 292: TIFF.GROUP3OPT,
# 293: TIFF.GROUP4OPT,
296: TIFF.RESUNIT,
300: TIFF.COLORRESPONSEUNIT,
317: TIFF.PREDICTOR,
338: TIFF.EXTRASAMPLE,
339: TIFF.SAMPLEFORMAT,
# 512: TIFF.JPEGPROC,
# 531: TIFF.YCBCRPOSITION,
}
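# Example (hedged, doctest-style): a raw integer tag value can be mapped to
# its enum member via this table. Assumes the lazy TIFF namespace used
# elsewhere in this module:
#   >>> TIFF.TAG_ENUM[259](5)
#   <COMPRESSION.LZW: 5>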
def FILETYPE():
class FILETYPE(enum.IntFlag):
# Python 3.6 only
UNDEFINED = 0
REDUCEDIMAGE = 1
PAGE = 2
MASK = 4
return FILETYPE
def OFILETYPE():
class OFILETYPE(enum.IntEnum):
UNDEFINED = 0
IMAGE = 1
REDUCEDIMAGE = 2
PAGE = 3
return OFILETYPE
def COMPRESSION():
class COMPRESSION(enum.IntEnum):
NONE = 1 # Uncompressed
CCITTRLE = 2 # CCITT 1D
CCITT_T4 = 3 # 'T4/Group 3 Fax',
CCITT_T6 = 4 # 'T6/Group 4 Fax',
LZW = 5
OJPEG = 6 # old-style JPEG
JPEG = 7
ADOBE_DEFLATE = 8
JBIG_BW = 9
JBIG_COLOR = 10
JPEG_99 = 99
KODAK_262 = 262
NEXT = 32766
SONY_ARW = 32767
PACKED_RAW = 32769
SAMSUNG_SRW = 32770
CCIRLEW = 32771
SAMSUNG_SRW2 = 32772
PACKBITS = 32773
THUNDERSCAN = 32809
IT8CTPAD = 32895
IT8LW = 32896
IT8MP = 32897
IT8BL = 32898
PIXARFILM = 32908
PIXARLOG = 32909
DEFLATE = 32946
DCS = 32947
APERIO_JP2000_YCBC = 33003 # Leica Aperio
APERIO_JP2000_RGB = 33005 # Leica Aperio
JBIG = 34661
SGILOG = 34676
SGILOG24 = 34677
JPEG2000 = 34712
NIKON_NEF = 34713
JBIG2 = 34715
MDI_BINARY = 34718 # Microsoft Document Imaging
MDI_PROGRESSIVE = 34719 # Microsoft Document Imaging
MDI_VECTOR = 34720 # Microsoft Document Imaging
LERC = 34887 # ESRI Lerc
JPEG_LOSSY = 34892
LZMA = 34925
ZSTD_DEPRECATED = 34926
WEBP_DEPRECATED = 34927
OPS_PNG = 34933 # Objective Pathology Services
OPS_JPEGXR = 34934 # Objective Pathology Services
ZSTD = 50000
WEBP = 50001
PIXTIFF = 50013
KODAK_DCR = 65000
PENTAX_PEF = 65535
# def __bool__(self): return self != 1 # Python 3.6+ only
return COMPRESSION
def PHOTOMETRIC():
class PHOTOMETRIC(enum.IntEnum):
MINISWHITE = 0
MINISBLACK = 1
RGB = 2
PALETTE = 3
MASK = 4
SEPARATED = 5 # CMYK
YCBCR = 6
CIELAB = 8
ICCLAB = 9
ITULAB = 10
CFA = 32803 # Color Filter Array
LOGL = 32844
LOGLUV = 32845
LINEAR_RAW = 34892
return PHOTOMETRIC
def THRESHHOLD():
class THRESHHOLD(enum.IntEnum):
BILEVEL = 1
HALFTONE = 2
ERRORDIFFUSE = 3
return THRESHHOLD
def FILLORDER():
class FILLORDER(enum.IntEnum):
MSB2LSB = 1
LSB2MSB = 2
return FILLORDER
def ORIENTATION():
class ORIENTATION(enum.IntEnum):
TOPLEFT = 1
TOPRIGHT = 2
BOTRIGHT = 3
BOTLEFT = 4
LEFTTOP = 5
RIGHTTOP = 6
RIGHTBOT = 7
LEFTBOT = 8
return ORIENTATION
def PLANARCONFIG():
class PLANARCONFIG(enum.IntEnum):
CONTIG = 1
SEPARATE = 2
return PLANARCONFIG
def GRAYRESPONSEUNIT():
class GRAYRESPONSEUNIT(enum.IntEnum):
_10S = 1
_100S = 2
_1000S = 3
_10000S = 4
_100000S = 5
return GRAYRESPONSEUNIT
def GROUP4OPT():
class GROUP4OPT(enum.IntEnum):
UNCOMPRESSED = 2
return GROUP4OPT
def RESUNIT():
class RESUNIT(enum.IntEnum):
NONE = 1
INCH = 2
CENTIMETER = 3
# def __bool__(self): return self != 1 # Python 3.6 only
return RESUNIT
def COLORRESPONSEUNIT():
class COLORRESPONSEUNIT(enum.IntEnum):
_10S = 1
_100S = 2
_1000S = 3
_10000S = 4
_100000S = 5
return COLORRESPONSEUNIT
def PREDICTOR():
class PREDICTOR(enum.IntEnum):
NONE = 1
HORIZONTAL = 2
FLOATINGPOINT = 3
# def __bool__(self): return self != 1 # Python 3.6 only
return PREDICTOR
def EXTRASAMPLE():
class EXTRASAMPLE(enum.IntEnum):
UNSPECIFIED = 0
ASSOCALPHA = 1
UNASSALPHA = 2
return EXTRASAMPLE
def SAMPLEFORMAT():
class SAMPLEFORMAT(enum.IntEnum):
UINT = 1
INT = 2
IEEEFP = 3
VOID = 4
COMPLEXINT = 5
COMPLEXIEEEFP = 6
return SAMPLEFORMAT
def DATATYPES():
class DATATYPES(enum.IntEnum):
NOTYPE = 0
BYTE = 1
ASCII = 2
SHORT = 3
LONG = 4
RATIONAL = 5
SBYTE = 6
UNDEFINED = 7
SSHORT = 8
SLONG = 9
SRATIONAL = 10
FLOAT = 11
DOUBLE = 12
IFD = 13
UNICODE = 14
COMPLEX = 15
LONG8 = 16
SLONG8 = 17
IFD8 = 18
return DATATYPES
def DATA_FORMATS():
# Map TIFF DATATYPES to Python struct formats
return {
1: '1B', # BYTE 8-bit unsigned integer.
2: '1s', # ASCII 8-bit byte that contains a 7-bit ASCII code;
# the last byte must be NULL (binary zero).
3: '1H', # SHORT 16-bit (2-byte) unsigned integer
4: '1I', # LONG 32-bit (4-byte) unsigned integer.
5: '2I', # RATIONAL Two LONGs: the first represents the numerator
# of a fraction; the second, the denominator.
6: '1b', # SBYTE An 8-bit signed (twos-complement) integer.
7: '1B', # UNDEFINED An 8-bit byte that may contain anything,
# depending on the definition of the field.
8: '1h', # SSHORT A 16-bit (2-byte) signed (twos-complement)
# integer.
9: '1i', # SLONG A 32-bit (4-byte) signed (twos-complement)
# integer.
10: '2i', # SRATIONAL Two SLONGs: the first represents the
# numerator of a fraction, the second the denominator.
11: '1f', # FLOAT Single precision (4-byte) IEEE format.
12: '1d', # DOUBLE Double precision (8-byte) IEEE format.
13: '1I', # IFD unsigned 4 byte IFD offset.
# 14: '', # UNICODE
# 15: '', # COMPLEX
16: '1Q', # LONG8 unsigned 8 byte integer (BigTiff)
17: '1q', # SLONG8 signed 8 byte integer (BigTiff)
18: '1Q', # IFD8 unsigned 8 byte IFD offset (BigTiff)
}
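# Example (hedged, doctest-style): prefix one of these formats with a byte
# order character to unpack a raw tag value with the stdlib struct module;
# no tifffile-specific names are assumed here:
#   >>> import struct
#   >>> # RATIONAL (type 5), little-endian: numerator 1, denominator 2
#   >>> struct.unpack('<2I', b'\x01\x00\x00\x00\x02\x00\x00\x00')
#   (1, 2)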
def DATA_DTYPES():
# Map numpy dtypes to TIFF DATATYPES
return {'B': 1, 's': 2, 'H': 3, 'I': 4, '2I': 5, 'b': 6,
'h': 8, 'i': 9, '2i': 10, 'f': 11, 'd': 12, 'Q': 16, 'q': 17}
def SAMPLE_DTYPES():
# Map TIFF SampleFormats and BitsPerSample to numpy dtype
return {
# UINT
(1, 1): '?', # bitmap
(1, 2): 'B',
(1, 3): 'B',
(1, 4): 'B',
(1, 5): 'B',
(1, 6): 'B',
(1, 7): 'B',
(1, 8): 'B',
(1, 9): 'H',
(1, 10): 'H',
(1, 11): 'H',
(1, 12): 'H',
(1, 13): 'H',
(1, 14): 'H',
(1, 15): 'H',
(1, 16): 'H',
(1, 17): 'I',
(1, 18): 'I',
(1, 19): 'I',
(1, 20): 'I',
(1, 21): 'I',
(1, 22): 'I',
(1, 23): 'I',
(1, 24): 'I',
(1, 25): 'I',
(1, 26): 'I',
(1, 27): 'I',
(1, 28): 'I',
(1, 29): 'I',
(1, 30): 'I',
(1, 31): 'I',
(1, 32): 'I',
(1, 64): 'Q',
# VOID : treat as UINT
(4, 1): '?', # bitmap
(4, 2): 'B',
(4, 3): 'B',
(4, 4): 'B',
(4, 5): 'B',
(4, 6): 'B',
(4, 7): 'B',
(4, 8): 'B',
(4, 9): 'H',
(4, 10): 'H',
(4, 11): 'H',
(4, 12): 'H',
(4, 13): 'H',
(4, 14): 'H',
(4, 15): 'H',
(4, 16): 'H',
(4, 17): 'I',
(4, 18): 'I',
(4, 19): 'I',
(4, 20): 'I',
(4, 21): 'I',
(4, 22): 'I',
(4, 23): 'I',
(4, 24): 'I',
(4, 25): 'I',
(4, 26): 'I',
(4, 27): 'I',
(4, 28): 'I',
(4, 29): 'I',
(4, 30): 'I',
(4, 31): 'I',
(4, 32): 'I',
(4, 64): 'Q',
# INT
(2, 8): 'b',
(2, 16): 'h',
(2, 32): 'i',
(2, 64): 'q',
# IEEEFP : 24 bit not supported by numpy
(3, 16): 'e',
# (3, 24): '', #
(3, 32): 'f',
(3, 64): 'd',
# COMPLEXIEEEFP
(6, 64): 'F',
(6, 128): 'D',
# RGB565
(1, (5, 6, 5)): 'B',
# COMPLEXINT : not supported by numpy
}
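# Example (hedged, doctest-style): sub-byte and odd bit depths are promoted
# to the next native integer size, e.g. 12-bit unsigned samples are stored
# as uint16. Assumes the lazy TIFF namespace and numpy as imported elsewhere
# in this module:
#   >>> TIFF.SAMPLE_DTYPES[(1, 12)]
#   'H'
#   >>> numpy.dtype('<H').itemsize
#   2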
def PREDICTORS():
# Map PREDICTOR to predictor encode functions
if imagecodecs is None:
return {
None: identityfunc,
1: identityfunc,
2: delta_encode,
}
return {
None: imagecodecs.none_encode,
1: imagecodecs.none_encode,
2: imagecodecs.delta_encode,
3: imagecodecs.floatpred_encode,
}
def UNPREDICTORS():
# Map PREDICTOR to predictor decode functions
if imagecodecs is None:
return {
None: identityfunc,
1: identityfunc,
2: delta_decode,
}
return {
None: imagecodecs.none_decode,
1: imagecodecs.none_decode,
2: imagecodecs.delta_decode,
3: imagecodecs.floatpred_decode,
}
def COMPESSORS():
# Map COMPRESSION to compress functions
if imagecodecs is None:
import zlib
# import lzma
return {
None: identityfunc,
1: identityfunc,
8: zlib.compress,
32946: zlib.compress,
# 34925: lzma.compress
}
return {
None: imagecodecs.none_encode,
1: imagecodecs.none_encode,
8: imagecodecs.zlib_encode,
32946: imagecodecs.zlib_encode,
32773: imagecodecs.packbits_encode,
34925: imagecodecs.lzma_encode,
50000: imagecodecs.zstd_encode,
50001: imagecodecs.webp_encode
}
def DECOMPESSORS():
# Map COMPRESSION to decompress functions
if imagecodecs is None:
import zlib
# import lzma
return {
None: identityfunc,
1: identityfunc,
8: zlib.decompress,
32946: zlib.decompress,
# 34925: lzma.decompress
}
return {
None: imagecodecs.none_decode,
1: imagecodecs.none_decode,
5: imagecodecs.lzw_decode,
7: imagecodecs.jpeg_decode,
8: imagecodecs.zlib_decode,
32946: imagecodecs.zlib_decode,
32773: imagecodecs.packbits_decode,
# 34892: imagecodecs.jpeg_decode, # DNG lossy
34925: imagecodecs.lzma_decode,
34926: imagecodecs.zstd_decode, # deprecated
34927: imagecodecs.webp_decode, # deprecated
33003: imagecodecs.j2k_decode,
33005: imagecodecs.j2k_decode,
34712: imagecodecs.j2k_decode,
34933: imagecodecs.png_decode,
34934: imagecodecs.jxr_decode,
50000: imagecodecs.zstd_decode,
50001: imagecodecs.webp_decode,
}
def FRAME_ATTRS():
# Attributes that a TiffFrame shares with its keyframe
return set('shape ndim size dtype axes is_final'.split())
def FILE_FLAGS():
# TiffFile and TiffPage 'is_*' attributes
exclude = set('reduced mask final memmappable '
'contiguous tiled chroma_subsampled'.split())
return set(a[3:] for a in dir(TiffPage)
if a[:3] == 'is_' and a[3:] not in exclude)
def FILE_EXTENSIONS():
# TIFF file extensions
return tuple('tif tiff ome.tif lsm stk qpi pcoraw '
'gel seq svs zif ndpi bif tf8 tf2 btf'.split())
def FILEOPEN_FILTER():
# String for use in Windows File Open box
return [('%s files' % ext.upper(), '*.%s' % ext)
for ext in TIFF.FILE_EXTENSIONS] + [('allfiles', '*')]
def AXES_LABELS():
# TODO: is there a standard for character axes labels?
axes = {
'X': 'width',
'Y': 'height',
'Z': 'depth',
'S': 'sample', # rgb(a)
'I': 'series', # general sequence, plane, page, IFD
'T': 'time',
'C': 'channel', # color, emission wavelength
'A': 'angle',
'P': 'phase', # formerly F # P is Position in LSM!
'R': 'tile', # region, point, mosaic
'H': 'lifetime', # histogram
'E': 'lambda', # excitation wavelength
'L': 'exposure', # lux
'V': 'event',
'Q': 'other',
'M': 'mosaic', # LSM 6
}
axes.update(dict((v, k) for k, v in axes.items()))
return axes
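# Example (hedged, doctest-style): the mapping is made bidirectional by the
# update above, so single-letter codes and long names resolve to each other.
# Assumes the lazy TIFF namespace used elsewhere in this module:
#   >>> TIFF.AXES_LABELS['X']
#   'width'
#   >>> TIFF.AXES_LABELS['width']
#   'X'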
def NDPI_TAGS():
# 65420 - 65458 Private Hamamatsu NDPI tags
tags = dict((code, str(code)) for code in range(65420, 65459))
tags.update({
65420: 'FileFormat',
65421: 'Magnification', # SourceLens
65422: 'XOffsetFromSlideCentre',
65423: 'YOffsetFromSlideCentre',
65424: 'ZOffsetFromSlideCentre',
65427: 'UserLabel',
65428: 'AuthCode', # ?
65442: 'ScannerSerialNumber',
65449: 'Comments',
65447: 'BlankLanes',
65434: 'Fluorescence',
})
return tags
def EXIF_TAGS():
tags = {
# 65000 - 65112 Photoshop Camera RAW EXIF tags
65000: 'OwnerName',
65001: 'SerialNumber',
65002: 'Lens',
65100: 'RawFile',
65101: 'Converter',
65102: 'WhiteBalance',
65105: 'Exposure',
65106: 'Shadows',
65107: 'Brightness',
65108: 'Contrast',
65109: 'Saturation',
65110: 'Sharpness',
65111: 'Smoothness',
65112: 'MoireFilter',
}
tags.update(TIFF.TAGS)
return tags
def GPS_TAGS():
return {
0: 'GPSVersionID',
1: 'GPSLatitudeRef',
2: 'GPSLatitude',
3: 'GPSLongitudeRef',
4: 'GPSLongitude',
5: 'GPSAltitudeRef',
6: 'GPSAltitude',
7: 'GPSTimeStamp',
8: 'GPSSatellites',
9: 'GPSStatus',
10: 'GPSMeasureMode',
11: 'GPSDOP',
12: 'GPSSpeedRef',
13: 'GPSSpeed',
14: 'GPSTrackRef',
15: 'GPSTrack',
16: 'GPSImgDirectionRef',
17: 'GPSImgDirection',
18: 'GPSMapDatum',
19: 'GPSDestLatitudeRef',
20: 'GPSDestLatitude',
21: 'GPSDestLongitudeRef',
22: 'GPSDestLongitude',
23: 'GPSDestBearingRef',
24: 'GPSDestBearing',
25: 'GPSDestDistanceRef',
26: 'GPSDestDistance',
27: 'GPSProcessingMethod',
28: 'GPSAreaInformation',
29: 'GPSDateStamp',
30: 'GPSDifferential',
31: 'GPSHPositioningError',
}
def IOP_TAGS():
return {
1: 'InteroperabilityIndex',
2: 'InteroperabilityVersion',
4096: 'RelatedImageFileFormat',
4097: 'RelatedImageWidth',
4098: 'RelatedImageLength',
}
def GEO_KEYS():
return {
1024: 'GTModelTypeGeoKey',
1025: 'GTRasterTypeGeoKey',
1026: 'GTCitationGeoKey',
2048: 'GeographicTypeGeoKey',
2049: 'GeogCitationGeoKey',
2050: 'GeogGeodeticDatumGeoKey',
2051: 'GeogPrimeMeridianGeoKey',
2052: 'GeogLinearUnitsGeoKey',
2053: 'GeogLinearUnitSizeGeoKey',
2054: 'GeogAngularUnitsGeoKey',
2055: 'GeogAngularUnitSizeGeoKey',
2056: 'GeogEllipsoidGeoKey',
2057: 'GeogSemiMajorAxisGeoKey',
2058: 'GeogSemiMinorAxisGeoKey',
2059: 'GeogInvFlatteningGeoKey',
2060: 'GeogAzimuthUnitsGeoKey',
2061: 'GeogPrimeMeridianLongGeoKey',
2062: 'GeogTOWGS84GeoKey',
3059: 'ProjLinearUnitsInterpCorrectGeoKey', # GDAL
3072: 'ProjectedCSTypeGeoKey',
3073: 'PCSCitationGeoKey',
3074: 'ProjectionGeoKey',
3075: 'ProjCoordTransGeoKey',
3076: 'ProjLinearUnitsGeoKey',
3077: 'ProjLinearUnitSizeGeoKey',
3078: 'ProjStdParallel1GeoKey',
3079: 'ProjStdParallel2GeoKey',
3080: 'ProjNatOriginLongGeoKey',
3081: 'ProjNatOriginLatGeoKey',
3082: 'ProjFalseEastingGeoKey',
3083: 'ProjFalseNorthingGeoKey',
3084: 'ProjFalseOriginLongGeoKey',
3085: 'ProjFalseOriginLatGeoKey',
3086: 'ProjFalseOriginEastingGeoKey',
3087: 'ProjFalseOriginNorthingGeoKey',
3088: 'ProjCenterLongGeoKey',
3089: 'ProjCenterLatGeoKey',
3090: 'ProjCenterEastingGeoKey',
3091: 'ProjCenterNorthingGeoKey',
3092: 'ProjScaleAtNatOriginGeoKey',
3093: 'ProjScaleAtCenterGeoKey',
3094: 'ProjAzimuthAngleGeoKey',
3095: 'ProjStraightVertPoleLongGeoKey',
3096: 'ProjRectifiedGridAngleGeoKey',
4096: 'VerticalCSTypeGeoKey',
4097: 'VerticalCitationGeoKey',
4098: 'VerticalDatumGeoKey',
4099: 'VerticalUnitsGeoKey',
}
def GEO_CODES():
try:
from .tifffile_geodb import GEO_CODES # delayed import
except (ImportError, ValueError):
try:
from tifffile_geodb import GEO_CODES # delayed import
except (ImportError, ValueError):
GEO_CODES = {}
return GEO_CODES
def CZ_LSMINFO():
return [
('MagicNumber', 'u4'),
('StructureSize', 'i4'),
('DimensionX', 'i4'),
('DimensionY', 'i4'),
('DimensionZ', 'i4'),
('DimensionChannels', 'i4'),
('DimensionTime', 'i4'),
('DataType', 'i4'), # DATATYPES
('ThumbnailX', 'i4'),
('ThumbnailY', 'i4'),
('VoxelSizeX', 'f8'),
('VoxelSizeY', 'f8'),
('VoxelSizeZ', 'f8'),
('OriginX', 'f8'),
('OriginY', 'f8'),
('OriginZ', 'f8'),
('ScanType', 'u2'),
('SpectralScan', 'u2'),
('TypeOfData', 'u4'), # TYPEOFDATA
('OffsetVectorOverlay', 'u4'),
('OffsetInputLut', 'u4'),
('OffsetOutputLut', 'u4'),
('OffsetChannelColors', 'u4'),
('TimeIntervall', 'f8'),
('OffsetChannelDataTypes', 'u4'),
('OffsetScanInformation', 'u4'), # SCANINFO
('OffsetKsData', 'u4'),
('OffsetTimeStamps', 'u4'),
('OffsetEventList', 'u4'),
('OffsetRoi', 'u4'),
('OffsetBleachRoi', 'u4'),
('OffsetNextRecording', 'u4'),
# LSM 2.0 ends here
('DisplayAspectX', 'f8'),
('DisplayAspectY', 'f8'),
('DisplayAspectZ', 'f8'),
('DisplayAspectTime', 'f8'),
('OffsetMeanOfRoisOverlay', 'u4'),
('OffsetTopoIsolineOverlay', 'u4'),
('OffsetTopoProfileOverlay', 'u4'),
('OffsetLinescanOverlay', 'u4'),
('ToolbarFlags', 'u4'),
('OffsetChannelWavelength', 'u4'),
('OffsetChannelFactors', 'u4'),
('ObjectiveSphereCorrection', 'f8'),
('OffsetUnmixParameters', 'u4'),
# LSM 3.2, 4.0 end here
('OffsetAcquisitionParameters', 'u4'),
('OffsetCharacteristics', 'u4'),
('OffsetPalette', 'u4'),
('TimeDifferenceX', 'f8'),
('TimeDifferenceY', 'f8'),
('TimeDifferenceZ', 'f8'),
('InternalUse1', 'u4'),
('DimensionP', 'i4'),
('DimensionM', 'i4'),
('DimensionsReserved', '16i4'),
('OffsetTilePositions', 'u4'),
('', '9u4'), # Reserved
('OffsetPositions', 'u4'),
# ('', '21u4'), # must be 0
]
def CZ_LSMINFO_READERS():
# Reader functions for CZ_LSMINFO sub-records
# TODO: read more CZ_LSMINFO sub-records
return {
'ScanInformation': read_lsm_scaninfo,
'TimeStamps': read_lsm_timestamps,
'EventList': read_lsm_eventlist,
'ChannelColors': read_lsm_channelcolors,
'Positions': read_lsm_floatpairs,
'TilePositions': read_lsm_floatpairs,
'VectorOverlay': None,
'InputLut': None,
'OutputLut': None,
'TimeIntervall': None,
'ChannelDataTypes': None,
'KsData': None,
'Roi': None,
'BleachRoi': None,
'NextRecording': None,
'MeanOfRoisOverlay': None,
'TopoIsolineOverlay': None,
'TopoProfileOverlay': None,
'ChannelWavelength': None,
'SphereCorrection': None,
'ChannelFactors': None,
'UnmixParameters': None,
'AcquisitionParameters': None,
'Characteristics': None,
}
def CZ_LSMINFO_SCANTYPE():
# Map CZ_LSMINFO.ScanType to dimension order
return {
0: 'XYZCT', # 'Stack' normal x-y-z-scan
1: 'XYZCT', # 'Z-Scan' x-z-plane Y=1
2: 'XYZCT', # 'Line'
3: 'XYTCZ', # 'Time Series Plane' time series x-y XYCTZ ? Z=1
4: 'XYZTC', # 'Time Series z-Scan' time series x-z
5: 'XYTCZ', # 'Time Series Mean-of-ROIs'
6: 'XYZTC', # 'Time Series Stack' time series x-y-z
7: 'XYCTZ', # Spline Scan
8: 'XYCZT', # Spline Plane x-z
9: 'XYTCZ', # Time Series Spline Plane x-z
10: 'XYZCT', # 'Time Series Point' point mode
}
def CZ_LSMINFO_DIMENSIONS():
# Map dimension codes to CZ_LSMINFO attribute
return {
'X': 'DimensionX',
'Y': 'DimensionY',
'Z': 'DimensionZ',
'C': 'DimensionChannels',
'T': 'DimensionTime',
'P': 'DimensionP',
'M': 'DimensionM',
}
def CZ_LSMINFO_DATATYPES():
# Description of CZ_LSMINFO.DataType
return {
0: 'varying data types',
1: '8 bit unsigned integer',
2: '12 bit unsigned integer',
5: '32 bit float',
}
def CZ_LSMINFO_TYPEOFDATA():
# Description of CZ_LSMINFO.TypeOfData
return {
0: 'Original scan data',
1: 'Calculated data',
2: '3D reconstruction',
3: 'Topography height map',
}
def CZ_LSMINFO_SCANINFO_ARRAYS():
return {
0x20000000: 'Tracks',
0x30000000: 'Lasers',
0x60000000: 'DetectionChannels',
0x80000000: 'IlluminationChannels',
0xa0000000: 'BeamSplitters',
0xc0000000: 'DataChannels',
0x11000000: 'Timers',
0x13000000: 'Markers',
}
def CZ_LSMINFO_SCANINFO_STRUCTS():
return {
# 0x10000000: 'Recording',
0x40000000: 'Track',
0x50000000: 'Laser',
0x70000000: 'DetectionChannel',
0x90000000: 'IlluminationChannel',
0xb0000000: 'BeamSplitter',
0xd0000000: 'DataChannel',
0x12000000: 'Timer',
0x14000000: 'Marker',
}
def CZ_LSMINFO_SCANINFO_ATTRIBUTES():
return {
# Recording
0x10000001: 'Name',
0x10000002: 'Description',
0x10000003: 'Notes',
0x10000004: 'Objective',
0x10000005: 'ProcessingSummary',
0x10000006: 'SpecialScanMode',
0x10000007: 'ScanType',
0x10000008: 'ScanMode',
0x10000009: 'NumberOfStacks',
0x1000000a: 'LinesPerPlane',
0x1000000b: 'SamplesPerLine',
0x1000000c: 'PlanesPerVolume',
0x1000000d: 'ImagesWidth',
0x1000000e: 'ImagesHeight',
0x1000000f: 'ImagesNumberPlanes',
0x10000010: 'ImagesNumberStacks',
0x10000011: 'ImagesNumberChannels',
0x10000012: 'LinscanXySize',
0x10000013: 'ScanDirection',
0x10000014: 'TimeSeries',
0x10000015: 'OriginalScanData',
0x10000016: 'ZoomX',
0x10000017: 'ZoomY',
0x10000018: 'ZoomZ',
0x10000019: 'Sample0X',
0x1000001a: 'Sample0Y',
0x1000001b: 'Sample0Z',
0x1000001c: 'SampleSpacing',
0x1000001d: 'LineSpacing',
0x1000001e: 'PlaneSpacing',
0x1000001f: 'PlaneWidth',
0x10000020: 'PlaneHeight',
0x10000021: 'VolumeDepth',
0x10000023: 'Nutation',
0x10000034: 'Rotation',
0x10000035: 'Precession',
0x10000036: 'Sample0time',
0x10000037: 'StartScanTriggerIn',
0x10000038: 'StartScanTriggerOut',
0x10000039: 'StartScanEvent',
0x10000040: 'StartScanTime',
0x10000041: 'StopScanTriggerIn',
0x10000042: 'StopScanTriggerOut',
0x10000043: 'StopScanEvent',
0x10000044: 'StopScanTime',
0x10000045: 'UseRois',
0x10000046: 'UseReducedMemoryRois',
0x10000047: 'User',
0x10000048: 'UseBcCorrection',
0x10000049: 'PositionBcCorrection1',
0x10000050: 'PositionBcCorrection2',
0x10000051: 'InterpolationY',
0x10000052: 'CameraBinning',
0x10000053: 'CameraSupersampling',
0x10000054: 'CameraFrameWidth',
0x10000055: 'CameraFrameHeight',
0x10000056: 'CameraOffsetX',
0x10000057: 'CameraOffsetY',
0x10000059: 'RtBinning',
0x1000005a: 'RtFrameWidth',
0x1000005b: 'RtFrameHeight',
0x1000005c: 'RtRegionWidth',
0x1000005d: 'RtRegionHeight',
0x1000005e: 'RtOffsetX',
0x1000005f: 'RtOffsetY',
0x10000060: 'RtZoom',
0x10000061: 'RtLinePeriod',
0x10000062: 'Prescan',
0x10000063: 'ScanDirectionZ',
# Track
0x40000001: 'MultiplexType', # 0 After Line; 1 After Frame
0x40000002: 'MultiplexOrder',
0x40000003: 'SamplingMode', # 0 Sample; 1 Line Avg; 2 Frame Avg
0x40000004: 'SamplingMethod', # 1 Mean; 2 Sum
0x40000005: 'SamplingNumber',
0x40000006: 'Acquire',
0x40000007: 'SampleObservationTime',
0x4000000b: 'TimeBetweenStacks',
0x4000000c: 'Name',
0x4000000d: 'Collimator1Name',
0x4000000e: 'Collimator1Position',
0x4000000f: 'Collimator2Name',
0x40000010: 'Collimator2Position',
0x40000011: 'IsBleachTrack',
0x40000012: 'IsBleachAfterScanNumber',
0x40000013: 'BleachScanNumber',
0x40000014: 'TriggerIn',
0x40000015: 'TriggerOut',
0x40000016: 'IsRatioTrack',
0x40000017: 'BleachCount',
0x40000018: 'SpiCenterWavelength',
0x40000019: 'PixelTime',
0x40000021: 'CondensorFrontlens',
0x40000023: 'FieldStopValue',
0x40000024: 'IdCondensorAperture',
0x40000025: 'CondensorAperture',
0x40000026: 'IdCondensorRevolver',
0x40000027: 'CondensorFilter',
0x40000028: 'IdTransmissionFilter1',
0x40000029: 'IdTransmission1',
0x40000030: 'IdTransmissionFilter2',
0x40000031: 'IdTransmission2',
0x40000032: 'RepeatBleach',
0x40000033: 'EnableSpotBleachPos',
0x40000034: 'SpotBleachPosx',
0x40000035: 'SpotBleachPosy',
0x40000036: 'SpotBleachPosz',
0x40000037: 'IdTubelens',
0x40000038: 'IdTubelensPosition',
0x40000039: 'TransmittedLight',
0x4000003a: 'ReflectedLight',
0x4000003b: 'SimultanGrabAndBleach',
0x4000003c: 'BleachPixelTime',
# Laser
0x50000001: 'Name',
0x50000002: 'Acquire',
0x50000003: 'Power',
# DetectionChannel
0x70000001: 'IntegrationMode',
0x70000002: 'SpecialMode',
0x70000003: 'DetectorGainFirst',
0x70000004: 'DetectorGainLast',
0x70000005: 'AmplifierGainFirst',
0x70000006: 'AmplifierGainLast',
0x70000007: 'AmplifierOffsFirst',
0x70000008: 'AmplifierOffsLast',
0x70000009: 'PinholeDiameter',
0x7000000a: 'CountingTrigger',
0x7000000b: 'Acquire',
0x7000000c: 'PointDetectorName',
0x7000000d: 'AmplifierName',
0x7000000e: 'PinholeName',
0x7000000f: 'FilterSetName',
0x70000010: 'FilterName',
0x70000013: 'IntegratorName',
0x70000014: 'ChannelName',
0x70000015: 'DetectorGainBc1',
0x70000016: 'DetectorGainBc2',
0x70000017: 'AmplifierGainBc1',
0x70000018: 'AmplifierGainBc2',
0x70000019: 'AmplifierOffsetBc1',
0x70000020: 'AmplifierOffsetBc2',
0x70000021: 'SpectralScanChannels',
0x70000022: 'SpiWavelengthStart',
0x70000023: 'SpiWavelengthStop',
0x70000026: 'DyeName',
0x70000027: 'DyeFolder',
# IlluminationChannel
0x90000001: 'Name',
0x90000002: 'Power',
0x90000003: 'Wavelength',
0x90000004: 'Aquire',
0x90000005: 'DetchannelName',
0x90000006: 'PowerBc1',
0x90000007: 'PowerBc2',
# BeamSplitter
0xb0000001: 'FilterSet',
0xb0000002: 'Filter',
0xb0000003: 'Name',
# DataChannel
0xd0000001: 'Name',
0xd0000003: 'Acquire',
0xd0000004: 'Color',
0xd0000005: 'SampleType',
0xd0000006: 'BitsPerSample',
0xd0000007: 'RatioType',
0xd0000008: 'RatioTrack1',
0xd0000009: 'RatioTrack2',
0xd000000a: 'RatioChannel1',
0xd000000b: 'RatioChannel2',
0xd000000c: 'RatioConst1',
0xd000000d: 'RatioConst2',
0xd000000e: 'RatioConst3',
0xd000000f: 'RatioConst4',
0xd0000010: 'RatioConst5',
0xd0000011: 'RatioConst6',
0xd0000012: 'RatioFirstImages1',
0xd0000013: 'RatioFirstImages2',
0xd0000014: 'DyeName',
0xd0000015: 'DyeFolder',
0xd0000016: 'Spectrum',
0xd0000017: 'Acquire',
# Timer
0x12000001: 'Name',
0x12000002: 'Description',
0x12000003: 'Interval',
0x12000004: 'TriggerIn',
0x12000005: 'TriggerOut',
0x12000006: 'ActivationTime',
0x12000007: 'ActivationNumber',
# Marker
0x14000001: 'Name',
0x14000002: 'Description',
0x14000003: 'TriggerIn',
0x14000004: 'TriggerOut',
}
def NIH_IMAGE_HEADER():
return [
('FileID', 'a8'),
('nLines', 'i2'),
('PixelsPerLine', 'i2'),
('Version', 'i2'),
('OldLutMode', 'i2'),
('OldnColors', 'i2'),
('Colors', 'u1', (3, 32)),
('OldColorStart', 'i2'),
('ColorWidth', 'i2'),
('ExtraColors', 'u2', (6, 3)),
('nExtraColors', 'i2'),
('ForegroundIndex', 'i2'),
('BackgroundIndex', 'i2'),
('XScale', 'f8'),
('Unused2', 'i2'),
('Unused3', 'i2'),
('UnitsID', 'i2'), # NIH_UNITS_TYPE
('p1', [('x', 'i2'), ('y', 'i2')]),
('p2', [('x', 'i2'), ('y', 'i2')]),
('CurveFitType', 'i2'), # NIH_CURVEFIT_TYPE
('nCoefficients', 'i2'),
('Coeff', 'f8', 6),
('UMsize', 'u1'),
('UM', 'a15'),
('UnusedBoolean', 'u1'),
('BinaryPic', 'b1'),
('SliceStart', 'i2'),
('SliceEnd', 'i2'),
('ScaleMagnification', 'f4'),
('nSlices', 'i2'),
('SliceSpacing', 'f4'),
('CurrentSlice', 'i2'),
('FrameInterval', 'f4'),
('PixelAspectRatio', 'f4'),
('ColorStart', 'i2'),
('ColorEnd', 'i2'),
('nColors', 'i2'),
('Fill1', '3u2'),
('Fill2', '3u2'),
('Table', 'u1'), # NIH_COLORTABLE_TYPE
('LutMode', 'u1'), # NIH_LUTMODE_TYPE
('InvertedTable', 'b1'),
('ZeroClip', 'b1'),
('XUnitSize', 'u1'),
('XUnit', 'a11'),
('StackType', 'i2'), # NIH_STACKTYPE_TYPE
# ('UnusedBytes', 'u1', 200)
]
def NIH_COLORTABLE_TYPE():
return ('CustomTable', 'AppleDefault', 'Pseudo20', 'Pseudo32',
'Rainbow', 'Fire1', 'Fire2', 'Ice', 'Grays', 'Spectrum')
def NIH_LUTMODE_TYPE():
return ('PseudoColor', 'OldAppleDefault', 'OldSpectrum', 'GrayScale',
'ColorLut', 'CustomGrayscale')
def NIH_CURVEFIT_TYPE():
return ('StraightLine', 'Poly2', 'Poly3', 'Poly4', 'Poly5', 'ExpoFit',
'PowerFit', 'LogFit', 'RodbardFit', 'SpareFit1',
'Uncalibrated', 'UncalibratedOD')
def NIH_UNITS_TYPE():
return ('Nanometers', 'Micrometers', 'Millimeters', 'Centimeters',
'Meters', 'Kilometers', 'Inches', 'Feet', 'Miles', 'Pixels',
'OtherUnits')
def NIH_STACKTYPE_TYPE():
return ('VolumeStack', 'RGBStack', 'MovieStack', 'HSVStack')
def TVIPS_HEADER_V1():
# TVIPS TemData structure from EMMENU Help file
return [
('Version', 'i4'),
('CommentV1', 'a80'),
('HighTension', 'i4'),
('SphericalAberration', 'i4'),
('IlluminationAperture', 'i4'),
('Magnification', 'i4'),
('PostMagnification', 'i4'),
('FocalLength', 'i4'),
('Defocus', 'i4'),
('Astigmatism', 'i4'),
('AstigmatismDirection', 'i4'),
('BiprismVoltage', 'i4'),
('SpecimenTiltAngle', 'i4'),
('SpecimenTiltDirection', 'i4'),
('IlluminationTiltDirection', 'i4'),
('IlluminationTiltAngle', 'i4'),
('ImageMode', 'i4'),
('EnergySpread', 'i4'),
('ChromaticAberration', 'i4'),
('ShutterType', 'i4'),
('DefocusSpread', 'i4'),
('CcdNumber', 'i4'),
('CcdSize', 'i4'),
('OffsetXV1', 'i4'),
('OffsetYV1', 'i4'),
('PhysicalPixelSize', 'i4'),
('Binning', 'i4'),
('ReadoutSpeed', 'i4'),
('GainV1', 'i4'),
('SensitivityV1', 'i4'),
('ExposureTimeV1', 'i4'),
('FlatCorrected', 'i4'),
('DeadPxCorrected', 'i4'),
('ImageMean', 'i4'),
('ImageStd', 'i4'),
('DisplacementX', 'i4'),
('DisplacementY', 'i4'),
('DateV1', 'i4'),
('TimeV1', 'i4'),
('ImageMin', 'i4'),
('ImageMax', 'i4'),
('ImageStatisticsQuality', 'i4'),
]
def TVIPS_HEADER_V2():
return [
('ImageName', 'V160'), # utf16
('ImageFolder', 'V160'),
('ImageSizeX', 'i4'),
('ImageSizeY', 'i4'),
('ImageSizeZ', 'i4'),
('ImageSizeE', 'i4'),
('ImageDataType', 'i4'),
('Date', 'i4'),
('Time', 'i4'),
('Comment', 'V1024'),
('ImageHistory', 'V1024'),
('Scaling', '16f4'),
('ImageStatistics', '16c16'),
('ImageType', 'i4'),
('ImageDisplaType', 'i4'),
('PixelSizeX', 'f4'), # distance between two px in x, [nm]
('PixelSizeY', 'f4'), # distance between two px in y, [nm]
('ImageDistanceZ', 'f4'),
('ImageDistanceE', 'f4'),
('ImageMisc', '32f4'),
('TemType', 'V160'),
('TemHighTension', 'f4'),
('TemAberrations', '32f4'),
('TemEnergy', '32f4'),
('TemMode', 'i4'),
('TemMagnification', 'f4'),
('TemMagnificationCorrection', 'f4'),
('PostMagnification', 'f4'),
('TemStageType', 'i4'),
('TemStagePosition', '5f4'), # x, y, z, a, b
('TemImageShift', '2f4'),
('TemBeamShift', '2f4'),
('TemBeamTilt', '2f4'),
('TilingParameters', '7f4'), # 0: tiling? 1:x 2:y 3: max x
# 4: max y 5: overlap x 6: overlap y
('TemIllumination', '3f4'), # 0: spotsize 1: intensity
('TemShutter', 'i4'),
('TemMisc', '32f4'),
('CameraType', 'V160'),
('PhysicalPixelSizeX', 'f4'),
('PhysicalPixelSizeY', 'f4'),
('OffsetX', 'i4'),
('OffsetY', 'i4'),
('BinningX', 'i4'),
('BinningY', 'i4'),
('ExposureTime', 'f4'),
('Gain', 'f4'),
('ReadoutRate', 'f4'),
('FlatfieldDescription', 'V160'),
('Sensitivity', 'f4'),
('Dose', 'f4'),
('CamMisc', '32f4'),
('FeiMicroscopeInformation', 'V1024'),
('FeiSpecimenInformation', 'V1024'),
('Magic', 'u4'),
]
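The header definitions above are numpy structured-dtype field lists; a reader builds a dtype from the list and maps the raw header bytes onto it. A minimal sketch using a hypothetical two-field subset and a synthetic buffer (not a real TVIPS record):

```python
import numpy

# Build a little-endian structured dtype from a field list like the ones
# above, then decode a raw byte buffer with it (buffer is synthetic).
fields = [('ImageSizeX', 'i4'), ('ImageSizeY', 'i4')]
dtype = numpy.dtype(fields).newbyteorder('<')
buf = numpy.array([(1024, 768)], dtype=dtype).tobytes()  # 8 bytes
rec = numpy.frombuffer(buf, dtype=dtype)[0]
print(int(rec['ImageSizeX']), int(rec['ImageSizeY']))
```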
def MM_HEADER():
# Olympus FluoView MM_Header
MM_DIMENSION = [
('Name', 'a16'),
('Size', 'i4'),
('Origin', 'f8'),
('Resolution', 'f8'),
('Unit', 'a64')]
return [
('HeaderFlag', 'i2'),
('ImageType', 'u1'),
('ImageName', 'a257'),
('OffsetData', 'u4'),
('PaletteSize', 'i4'),
('OffsetPalette0', 'u4'),
('OffsetPalette1', 'u4'),
('CommentSize', 'i4'),
('OffsetComment', 'u4'),
('Dimensions', MM_DIMENSION, 10),
('OffsetPosition', 'u4'),
('MapType', 'i2'),
('MapMin', 'f8'),
('MapMax', 'f8'),
('MinValue', 'f8'),
('MaxValue', 'f8'),
('OffsetMap', 'u4'),
('Gamma', 'f8'),
('Offset', 'f8'),
('GrayChannel', MM_DIMENSION),
('OffsetThumbnail', 'u4'),
('VoiceField', 'i4'),
('OffsetVoiceField', 'u4'),
]
def MM_DIMENSIONS():
# Map FluoView MM_Header.Dimensions to axes characters
return {
'X': 'X',
'Y': 'Y',
'Z': 'Z',
'T': 'T',
'CH': 'C',
'WAVELENGTH': 'C',
'TIME': 'T',
'XY': 'R',
'EVENT': 'V',
'EXPOSURE': 'L',
}
def UIC_TAGS():
# Map Universal Imaging Corporation MetaMorph internal tag ids to
# name and type
from fractions import Fraction # delayed import
return [
('AutoScale', int),
('MinScale', int),
('MaxScale', int),
('SpatialCalibration', int),
('XCalibration', Fraction),
('YCalibration', Fraction),
('CalibrationUnits', str),
('Name', str),
('ThreshState', int),
('ThreshStateRed', int),
('tagid_10', None), # undefined
('ThreshStateGreen', int),
('ThreshStateBlue', int),
('ThreshStateLo', int),
('ThreshStateHi', int),
('Zoom', int),
('CreateTime', julian_datetime),
('LastSavedTime', julian_datetime),
('currentBuffer', int),
('grayFit', None),
('grayPointCount', None),
('grayX', Fraction),
('grayY', Fraction),
('grayMin', Fraction),
('grayMax', Fraction),
('grayUnitName', str),
('StandardLUT', int),
('wavelength', int),
('StagePosition', '(%i,2,2)u4'), # N xy positions as fract
('CameraChipOffset', '(%i,2,2)u4'), # N xy offsets as fract
('OverlayMask', None),
('OverlayCompress', None),
('Overlay', None),
('SpecialOverlayMask', None),
('SpecialOverlayCompress', None),
('SpecialOverlay', None),
('ImageProperty', read_uic_image_property),
('StageLabel', '%ip'), # N str
('AutoScaleLoInfo', Fraction),
('AutoScaleHiInfo', Fraction),
('AbsoluteZ', '(%i,2)u4'), # N fractions
('AbsoluteZValid', '(%i,)u4'), # N long
('Gamma', 'I'), # 'I' uses offset
('GammaRed', 'I'),
('GammaGreen', 'I'),
('GammaBlue', 'I'),
('CameraBin', '2I'),
('NewLUT', int),
('ImagePropertyEx', None),
('PlaneProperty', int),
('UserLutTable', '(256,3)u1'),
('RedAutoScaleInfo', int),
('RedAutoScaleLoInfo', Fraction),
('RedAutoScaleHiInfo', Fraction),
('RedMinScaleInfo', int),
('RedMaxScaleInfo', int),
('GreenAutoScaleInfo', int),
('GreenAutoScaleLoInfo', Fraction),
('GreenAutoScaleHiInfo', Fraction),
('GreenMinScaleInfo', int),
('GreenMaxScaleInfo', int),
('BlueAutoScaleInfo', int),
('BlueAutoScaleLoInfo', Fraction),
('BlueAutoScaleHiInfo', Fraction),
('BlueMinScaleInfo', int),
('BlueMaxScaleInfo', int),
# ('OverlayPlaneColor', read_uic_overlay_plane_color),
]
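Several of the format strings above contain a '%i' placeholder that is filled in with the plane count before the value is read; the result is a numpy subarray dtype. A small sketch (a plane count of 3 is arbitrary, chosen for illustration):

```python
import numpy

# 'StagePosition' uses '(%i,2,2)u4': N planes of 2x2 uint32 fraction pairs.
planecount = 3
fmt = '(%i,2,2)u4' % planecount
dtype = numpy.dtype(fmt)
print(dtype.shape, dtype.base)  # subarray shape and element type
```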
def PILATUS_HEADER():
# PILATUS CBF Header Specification, Version 1.4
# Map key to [value_indices], type
return {
'Detector': ([slice(1, None)], str),
'Pixel_size': ([1, 4], float),
'Silicon': ([3], float),
'Exposure_time': ([1], float),
'Exposure_period': ([1], float),
'Tau': ([1], float),
'Count_cutoff': ([1], int),
'Threshold_setting': ([1], float),
'Gain_setting': ([1, 2], str),
'N_excluded_pixels': ([1], int),
'Excluded_pixels': ([1], str),
'Flat_field': ([1], str),
'Trim_file': ([1], str),
'Image_path': ([1], str),
# optional
'Wavelength': ([1], float),
'Energy_range': ([1, 2], float),
'Detector_distance': ([1], float),
'Detector_Voffset': ([1], float),
'Beam_xy': ([1, 2], float),
'Flux': ([1], str),
'Filter_transmission': ([1], float),
'Start_angle': ([1], float),
'Angle_increment': ([1], float),
'Detector_2theta': ([1], float),
'Polarization': ([1], float),
'Alpha': ([1], float),
'Kappa': ([1], float),
'Phi': ([1], float),
'Phi_increment': ([1], float),
'Chi': ([1], float),
'Chi_increment': ([1], float),
'Oscillation_axis': ([slice(1, None)], str),
'N_oscillations': ([1], int),
'Start_position': ([1], float),
'Position_increment': ([1], float),
'Shutter_time': ([1], float),
'Omega': ([1], float),
'Omega_increment': ([1], float)
}
def ALLOCATIONGRANULARITY():
# alignment for writing contiguous data to TIFF
import mmap # delayed import
return mmap.ALLOCATIONGRANULARITY
def read_tags(fh, byteorder, offsetsize, tagnames, customtags=None,
maxifds=None):
"""Read tags from chain of IFDs and return as list of dicts.
The file handle position must be at a valid IFD header.
"""
if offsetsize == 4:
offsetformat = byteorder+'I'
tagnosize = 2
tagnoformat = byteorder+'H'
tagsize = 12
tagformat1 = byteorder+'HH'
tagformat2 = byteorder+'I4s'
elif offsetsize == 8:
offsetformat = byteorder+'Q'
tagnosize = 8
tagnoformat = byteorder+'Q'
tagsize = 20
tagformat1 = byteorder+'HH'
tagformat2 = byteorder+'Q8s'
else:
raise ValueError('invalid offset size')
if customtags is None:
customtags = {}
if maxifds is None:
maxifds = 2**32
result = []
unpack = struct.unpack
offset = fh.tell()
while len(result) < maxifds:
# loop over IFDs
try:
tagno = unpack(tagnoformat, fh.read(tagnosize))[0]
if tagno > 4096:
raise ValueError('suspicious number of tags')
except Exception:
logging.warning(
'read_tags: corrupted tag list at offset %i', offset)
break
tags = {}
data = fh.read(tagsize * tagno)
pos = fh.tell()
index = 0
for _ in range(tagno):
code, type_ = unpack(tagformat1, data[index:index+4])
count, value = unpack(tagformat2, data[index+4:index+tagsize])
index += tagsize
name = tagnames.get(code, str(code))
try:
                dtype = TIFF.DATA_FORMATS[type_]
            except KeyError:
                raise TiffTag.Error('unknown tag data type %i' % type_)
fmt = '%s%i%s' % (byteorder, count * int(dtype[0]), dtype[1])
size = struct.calcsize(fmt)
if size > offsetsize or code in customtags:
offset = unpack(offsetformat, value)[0]
if offset < 8 or offset > fh.size - size:
raise TiffTag.Error('invalid tag value offset %i' % offset)
fh.seek(offset)
if code in customtags:
readfunc = customtags[code][1]
value = readfunc(fh, byteorder, dtype, count, offsetsize)
elif type_ == 7 or (count > 1 and dtype[-1] == 'B'):
value = read_bytes(fh, byteorder, dtype, count, offsetsize)
elif code in tagnames or dtype[-1] == 's':
value = unpack(fmt, fh.read(size))
else:
value = read_numpy(fh, byteorder, dtype, count, offsetsize)
elif dtype[-1] == 'B' or type_ == 7:
value = value[:size]
else:
value = unpack(fmt, value[:size])
if code not in customtags and code not in TIFF.TAG_TUPLE:
if len(value) == 1:
value = value[0]
if type_ != 7 and dtype[-1] == 's' and isinstance(value, bytes):
# TIFF ASCII fields can contain multiple strings,
# each terminated with a NUL
try:
value = bytes2str(stripascii(value).strip())
except UnicodeDecodeError:
logging.warning(
'read_tags: coercing invalid ASCII to bytes (tag %i)',
code)
tags[name] = value
result.append(tags)
# read offset to next page
fh.seek(pos)
offset = unpack(offsetformat, fh.read(offsetsize))[0]
if offset == 0:
break
if offset >= fh.size:
logging.warning('read_tags: invalid page offset (%i)', offset)
break
fh.seek(offset)
if result and maxifds == 1:
result = result[0]
return result
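The struct formats chosen above mirror the classic (32-bit offset) TIFF IFD layout: a u2 entry count, 12-byte entries of (u2 code, u2 type, u4 count, 4-byte inline value or offset), then a u4 offset to the next IFD. A self-contained sketch that packs and re-parses one synthetic entry (tag 256, ImageWidth, stored as type 4 LONG; values chosen arbitrarily):

```python
import struct

byteorder = '<'
# one IFD entry: code=256 (ImageWidth), type=4 (LONG), count=1, inline value
value = struct.pack(byteorder + 'I', 640)
entry = struct.pack(byteorder + 'HHI', 256, 4, 1) + value
ifd = struct.pack(byteorder + 'H', 1) + entry + struct.pack(byteorder + 'I', 0)

tagno = struct.unpack(byteorder + 'H', ifd[:2])[0]
code, type_ = struct.unpack(byteorder + 'HH', ifd[2:6])
count, raw = struct.unpack(byteorder + 'I4s', ifd[6:14])
width = struct.unpack(byteorder + 'I', raw)[0]
nextifd = struct.unpack(byteorder + 'I', ifd[14:18])[0]
```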
def read_exif_ifd(fh, byteorder, dtype, count, offsetsize):
"""Read EXIF tags from file and return as dict."""
exif = read_tags(fh, byteorder, offsetsize, TIFF.EXIF_TAGS, maxifds=1)
for name in ('ExifVersion', 'FlashpixVersion'):
try:
exif[name] = bytes2str(exif[name])
except Exception:
pass
if 'UserComment' in exif:
idcode = exif['UserComment'][:8]
try:
if idcode == b'ASCII\x00\x00\x00':
exif['UserComment'] = bytes2str(exif['UserComment'][8:])
elif idcode == b'UNICODE\x00':
exif['UserComment'] = exif['UserComment'][8:].decode('utf-16')
except Exception:
pass
return exif
def read_gps_ifd(fh, byteorder, dtype, count, offsetsize):
"""Read GPS tags from file and return as dict."""
return read_tags(fh, byteorder, offsetsize, TIFF.GPS_TAGS, maxifds=1)
def read_interoperability_ifd(fh, byteorder, dtype, count, offsetsize):
"""Read Interoperability tags from file and return as dict."""
tag_names = {1: 'InteroperabilityIndex'}
return read_tags(fh, byteorder, offsetsize, tag_names, maxifds=1)
def read_bytes(fh, byteorder, dtype, count, offsetsize):
"""Read tag data from file and return as byte string."""
dtype = 'B' if dtype[-1] == 's' else byteorder+dtype[-1]
count *= numpy.dtype(dtype).itemsize
data = fh.read(count)
if len(data) != count:
logging.warning('read_bytes: failed to read all bytes (%i < %i)',
len(data), count)
return data
def read_utf8(fh, byteorder, dtype, count, offsetsize):
"""Read tag data from file and return as unicode string."""
return fh.read(count).decode('utf-8')
def read_numpy(fh, byteorder, dtype, count, offsetsize):
"""Read tag data from file and return as numpy array."""
dtype = 'b' if dtype[-1] == 's' else byteorder+dtype[-1]
return fh.read_array(dtype, count)
def read_colormap(fh, byteorder, dtype, count, offsetsize):
"""Read ColorMap data from file and return as numpy array."""
cmap = fh.read_array(byteorder+dtype[-1], count)
cmap.shape = (3, -1)
return cmap
def read_json(fh, byteorder, dtype, count, offsetsize):
"""Read JSON tag data from file and return as object."""
data = fh.read(count)
try:
return json.loads(unicode(stripnull(data), 'utf-8'))
except ValueError:
logging.warning('read_json: invalid JSON')
def read_mm_header(fh, byteorder, dtype, count, offsetsize):
"""Read FluoView mm_header tag from file and return as dict."""
mmh = fh.read_record(TIFF.MM_HEADER, byteorder=byteorder)
mmh = recarray2dict(mmh)
mmh['Dimensions'] = [
(bytes2str(d[0]).strip(), d[1], d[2], d[3], bytes2str(d[4]).strip())
for d in mmh['Dimensions']]
d = mmh['GrayChannel']
mmh['GrayChannel'] = (
bytes2str(d[0]).strip(), d[1], d[2], d[3], bytes2str(d[4]).strip())
return mmh
def read_mm_stamp(fh, byteorder, dtype, count, offsetsize):
"""Read FluoView mm_stamp tag from file and return as numpy.ndarray."""
return fh.read_array(byteorder+'f8', 8)
def read_uic1tag(fh, byteorder, dtype, count, offsetsize, planecount=None):
"""Read MetaMorph STK UIC1Tag from file and return as dict.
Return empty dictionary if planecount is unknown.
"""
assert dtype in ('2I', '1I') and byteorder == '<'
result = {}
if dtype == '2I':
# pre MetaMorph 2.5 (not tested)
        values = fh.read_array('<u4', 2*count).reshape(count, 2)
        result = {'ZDistance': values[:, 0] / values[:, 1]}
    elif planecount:
        for _ in range(count):
            tagid = struct.unpack('<I', fh.read(4))[0]
            if tagid in (28, 29, 37, 40, 41):
                # silently skip unexpected tags
                fh.read(4)
                continue
            name, value = read_uic_tag(fh, tagid, planecount, offset=True)
            result[name] = value
    return result
def read_cz_lsminfo(fh, byteorder, dtype, count, offsetsize):
    """Read CZ_LSMINFO tag from file and return as dict."""
    assert byteorder == '<'
    magic_number, structure_size = struct.unpack('<II', fh.read(8))
    if magic_number not in (50350412, 67127628):
        raise ValueError('invalid CZ_LSMINFO structure')
    fh.seek(-8, 1)
    if structure_size < numpy.dtype(TIFF.CZ_LSMINFO).itemsize:
        # adjust structure according to structure_size
        lsminfo = []
        size = 0
        for name, dtype in TIFF.CZ_LSMINFO:
            size += numpy.dtype(dtype).itemsize
            if size > structure_size:
                break
            lsminfo.append((name, dtype))
else:
lsminfo = TIFF.CZ_LSMINFO
lsminfo = fh.read_record(lsminfo, byteorder=byteorder)
lsminfo = recarray2dict(lsminfo)
# read LSM info subrecords at offsets
for name, reader in TIFF.CZ_LSMINFO_READERS.items():
if reader is None:
continue
offset = lsminfo.get('Offset' + name, 0)
if offset < 8:
continue
fh.seek(offset)
try:
lsminfo[name] = reader(fh)
except ValueError:
pass
return lsminfo
def read_lsm_floatpairs(fh):
"""Read LSM sequence of float pairs from file and return as list."""
    size = struct.unpack('<i', fh.read(4))[0]
    return fh.read_array('<2f8', count=size)
def read_lsm_eventlist(fh):
    """Read LSM events from file and return as list of (time, type, text)."""
    count = struct.unpack('<II', fh.read(8))[1]
    events = []
    while count > 0:
        esize, etime, etype = struct.unpack('<IdI', fh.read(16))
        etext = bytes2str(stripnull(fh.read(esize - 16)))
        events.append((etime, etype, etext))
        count -= 1
    return events
def read_scanimage_metadata(fh):
    """Read ScanImage BigTIFF v3 static and ROI metadata from open file.
    Return non-varying frame data as dict and ROI group data as JSON.
    Raise ValueError if file does not contain valid ScanImage v3 metadata.
    """
    fh.seek(0)
    try:
        byteorder, version = struct.unpack('<2sH', fh.read(4))
        if byteorder != b'II' or version != 43:
            raise Exception
        fh.seek(16)
        magic, version, size0, size1 = struct.unpack('<IIII', fh.read(16))
        if magic != 117637889 or version != 3:
            raise Exception
    except Exception:
        raise ValueError('not a ScanImage BigTIFF v3 file')
    frame_data = matlabstr2py(bytes2str(fh.read(size0)[:-1]))
    roi_data = read_json(fh, '<', None, size1, None) if size1 > 1 else {}
    return frame_data, roi_data
def read_micromanager_metadata(fh):
"""Read MicroManager non-TIFF settings from open file and return as dict.
The settings can be used to read image data without parsing the TIFF file.
Raise ValueError if the file does not contain valid MicroManager metadata.
"""
fh.seek(0)
try:
byteorder = {b'II': '<', b'MM': '>'}[fh.read(2)]
    except (KeyError, IndexError):
raise ValueError('not a MicroManager TIFF file')
result = {}
fh.seek(8)
(index_header, index_offset, display_header, display_offset,
comments_header, comments_offset, summary_header, summary_length
) = struct.unpack(byteorder + 'IIIIIIII', fh.read(32))
if summary_header != 2355492:
raise ValueError('invalid MicroManager summary header')
result['Summary'] = read_json(fh, byteorder, None, summary_length, None)
if index_header != 54773648:
raise ValueError('invalid MicroManager index header')
fh.seek(index_offset)
header, count = struct.unpack(byteorder + 'II', fh.read(8))
if header != 3453623:
raise ValueError('invalid MicroManager index header')
data = struct.unpack(byteorder + 'IIIII'*count, fh.read(20*count))
result['IndexMap'] = {'Channel': data[::5],
'Slice': data[1::5],
'Frame': data[2::5],
'Position': data[3::5],
'Offset': data[4::5]}
if display_header != 483765892:
raise ValueError('invalid MicroManager display header')
fh.seek(display_offset)
header, count = struct.unpack(byteorder + 'II', fh.read(8))
if header != 347834724:
raise ValueError('invalid MicroManager display header')
result['DisplaySettings'] = read_json(fh, byteorder, None, count, None)
if comments_header != 99384722:
raise ValueError('invalid MicroManager comments header')
fh.seek(comments_offset)
header, count = struct.unpack(byteorder + 'II', fh.read(8))
if header != 84720485:
raise ValueError('invalid MicroManager comments header')
result['Comments'] = read_json(fh, byteorder, None, count, None)
return result
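The parser above relies on a fixed layout: bytes 0-1 hold the TIFF byte-order mark, and offset 8 holds eight u4 words whose magic values identify the index map, display settings, comments, and summary blocks. A sketch packing and unpacking those eight words (offsets zeroed; this is synthetic data, not a real MicroManager file):

```python
import struct

# The eight u4 words read from offset 8, with the magic values the parser
# above checks; offset/length fields are zeroed for illustration.
byteorder = '<'
words = (54773648, 0, 483765892, 0, 99384722, 0, 2355492, 0)
head = struct.pack(byteorder + 'IIIIIIII', *words)
(index_header, index_offset, display_header, display_offset,
 comments_header, comments_offset, summary_header, summary_length
 ) = struct.unpack(byteorder + 'IIIIIIII', head)
```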
def read_metaseries_catalog(fh):
"""Read MetaSeries non-TIFF hint catalog from file.
Raise ValueError if the file does not contain a valid hint catalog.
"""
# TODO: implement read_metaseries_catalog
raise NotImplementedError()
def imagej_metadata_tag(metadata, byteorder):
"""Return IJMetadata and IJMetadataByteCounts tags from metadata dict.
The tags can be passed to the TiffWriter.save function as extratags.
The metadata dict may contain the following keys and values:
Info : str
Human-readable information as string.
Labels : sequence of str
Human-readable labels for each channel.
Ranges : sequence of doubles
Lower and upper values for each channel.
LUTs : sequence of (3, 256) uint8 ndarrays
Color palettes for each channel.
Plot : bytes
Undocumented ImageJ internal format.
ROI: bytes
Undocumented ImageJ internal region of interest format.
Overlays : bytes
Undocumented ImageJ internal format.
"""
header = [{'>': b'IJIJ', '<': b'JIJI'}[byteorder]]
bytecounts = [0]
body = []
def _string(data, byteorder):
return data.encode('utf-16' + {'>': 'be', '<': 'le'}[byteorder])
def _doubles(data, byteorder):
return struct.pack(byteorder+('d' * len(data)), *data)
def _ndarray(data, byteorder):
return data.tobytes()
def _bytes(data, byteorder):
return data
metadata_types = (
('Info', b'info', 1, _string),
('Labels', b'labl', None, _string),
('Ranges', b'rang', 1, _doubles),
('LUTs', b'luts', None, _ndarray),
('Plot', b'plot', 1, _bytes),
('ROI', b'roi ', 1, _bytes),
('Overlays', b'over', None, _bytes))
for key, mtype, count, func in metadata_types:
if key.lower() in metadata:
key = key.lower()
elif key not in metadata:
continue
if byteorder == '<':
mtype = mtype[::-1]
values = metadata[key]
if count is None:
count = len(values)
else:
values = [values]
header.append(mtype + struct.pack(byteorder+'I', count))
for value in values:
data = func(value, byteorder)
body.append(data)
bytecounts.append(len(data))
if not body:
return ()
body = b''.join(body)
header = b''.join(header)
data = header + body
bytecounts[0] = len(header)
bytecounts = struct.pack(byteorder+('I' * len(bytecounts)), *bytecounts)
return ((50839, 'B', len(data), data, True),
(50838, 'I', len(bytecounts)//4, bytecounts, True))
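The IJMetadata value built above has a simple layout: a header of the 4-byte magic (b'IJIJ' big-endian) plus (4-byte type id, u4 count) pairs, followed by the value bodies; the byte-counts tag stores the header length and then each body's length. A minimal big-endian sketch for a single 'info' entry (the string 'demo' is arbitrary):

```python
import struct

# header = magic + (type, count); body = one UTF-16 'info' string
info = 'demo'.encode('utf-16be')
header = b'IJIJ' + b'info' + struct.pack('>I', 1)
data = header + info
bytecounts = [len(header), len(info)]

# parse it back the way imagej_metadata would
mtype, count = struct.unpack('>4sI', data[4:12])
body = data[bytecounts[0]:bytecounts[0] + bytecounts[1]]
text = body.decode('utf-16be')
```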
def imagej_metadata(data, bytecounts, byteorder):
"""Return IJMetadata tag value as dict.
The 'Info' string can have multiple formats, e.g. OIF or ScanImage,
that might be parsed into dicts using the matlabstr2py or
oiffile.SettingsFile functions.
"""
def _string(data, byteorder):
return data.decode('utf-16' + {'>': 'be', '<': 'le'}[byteorder])
def _doubles(data, byteorder):
return struct.unpack(byteorder+('d' * (len(data) // 8)), data)
def _lut(data, byteorder):
return numpy.frombuffer(data, 'uint8').reshape(-1, 256)
def _bytes(data, byteorder):
return data
metadata_types = { # big-endian
b'info': ('Info', _string),
b'labl': ('Labels', _string),
b'rang': ('Ranges', _doubles),
b'luts': ('LUTs', _lut),
b'plot': ('Plots', _bytes),
b'roi ': ('ROI', _bytes),
b'over': ('Overlays', _bytes)}
metadata_types.update( # little-endian
dict((k[::-1], v) for k, v in metadata_types.items()))
if not bytecounts:
raise ValueError('no ImageJ metadata')
if not data[:4] in (b'IJIJ', b'JIJI'):
raise ValueError('invalid ImageJ metadata')
header_size = bytecounts[0]
if header_size < 12 or header_size > 804:
raise ValueError('invalid ImageJ metadata header size')
ntypes = (header_size - 4) // 8
header = struct.unpack(byteorder+'4sI'*ntypes, data[4:4+ntypes*8])
pos = 4 + ntypes * 8
counter = 0
result = {}
for mtype, count in zip(header[::2], header[1::2]):
values = []
name, func = metadata_types.get(mtype, (bytes2str(mtype), read_bytes))
for _ in range(count):
counter += 1
pos1 = pos + bytecounts[counter]
values.append(func(data[pos:pos1], byteorder))
pos = pos1
result[name.strip()] = values[0] if count == 1 else values
return result
def imagej_description_metadata(description):
"""Return metatata from ImageJ image description as dict.
Raise ValueError if not a valid ImageJ description.
>>> description = 'ImageJ=1.11a\\nimages=510\\nhyperstack=true\\n'
>>> imagej_description_metadata(description) # doctest: +SKIP
{'ImageJ': '1.11a', 'images': 510, 'hyperstack': True}
"""
def _bool(val):
return {'true': True, 'false': False}[val.lower()]
result = {}
for line in description.splitlines():
try:
key, val = line.split('=')
except Exception:
continue
key = key.strip()
val = val.strip()
for dtype in (int, float, _bool):
try:
val = dtype(val)
break
except Exception:
pass
result[key] = val
if 'ImageJ' not in result:
        raise ValueError('not an ImageJ image description')
return result
def imagej_description(shape, rgb=None, colormaped=False, version='1.11a',
hyperstack=None, mode=None, loop=None, **kwargs):
"""Return ImageJ image description from data shape.
ImageJ can handle up to 6 dimensions in order TZCYXS.
>>> imagej_description((51, 5, 2, 196, 171)) # doctest: +SKIP
ImageJ=1.11a
images=510
channels=2
slices=5
frames=51
hyperstack=true
mode=grayscale
loop=false
"""
if colormaped:
raise NotImplementedError('ImageJ colormapping not supported')
shape = imagej_shape(shape, rgb=rgb)
rgb = shape[-1] in (3, 4)
result = ['ImageJ=%s' % version]
append = []
result.append('images=%i' % product(shape[:-3]))
if hyperstack is None:
hyperstack = True
append.append('hyperstack=true')
else:
append.append('hyperstack=%s' % bool(hyperstack))
if shape[2] > 1:
result.append('channels=%i' % shape[2])
if mode is None and not rgb:
mode = 'grayscale'
if hyperstack and mode:
append.append('mode=%s' % mode)
if shape[1] > 1:
result.append('slices=%i' % shape[1])
if shape[0] > 1:
result.append('frames=%i' % shape[0])
if loop is None:
append.append('loop=false')
if loop is not None:
append.append('loop=%s' % bool(loop))
for key, value in kwargs.items():
append.append('%s=%s' % (key.lower(), value))
return '\n'.join(result + append + [''])
def imagej_shape(shape, rgb=None):
"""Return shape normalized to 6D ImageJ hyperstack TZCYXS.
Raise ValueError if not a valid ImageJ hyperstack shape.
>>> imagej_shape((2, 3, 4, 5, 3), False)
(2, 3, 4, 5, 3, 1)
"""
shape = tuple(int(i) for i in shape)
ndim = len(shape)
    if not 2 <= ndim <= 6:
        raise ValueError('invalid ImageJ hyperstack: not 2 to 6 dimensional')
if rgb is None:
rgb = shape[-1] in (3, 4) and ndim > 2
if rgb and shape[-1] not in (3, 4):
raise ValueError('invalid ImageJ hyperstack: not a RGB image')
if not rgb and ndim == 6 and shape[-1] != 1:
raise ValueError('invalid ImageJ hyperstack: not a non-RGB image')
if rgb or shape[-1] == 1:
return (1, ) * (6 - ndim) + shape
return (1, ) * (5 - ndim) + shape + (1,)
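The normalization rule above, restated as a simplified standalone sketch (validation omitted): pad the shape on the left with 1s to five axes TZCYX, then append a samples axis S=1 for non-RGB data, so every shape becomes 6D TZCYXS:

```python
# Simplified re-implementation for illustration; imagej_shape above
# additionally validates dimensionality and the RGB sample count.
def to_tzcyxs(shape, rgb):
    shape = tuple(int(i) for i in shape)
    if rgb or shape[-1] == 1:
        return (1,) * (6 - len(shape)) + shape
    return (1,) * (5 - len(shape)) + shape + (1,)

print(to_tzcyxs((196, 171), rgb=False))
```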
def json_description(shape, **metadata):
"""Return JSON image description from data shape and other meta data.
Return UTF-8 encoded JSON.
>>> json_description((256, 256, 3), axes='YXS') # doctest: +SKIP
b'{"shape": [256, 256, 3], "axes": "YXS"}'
"""
metadata.update(shape=shape)
return json.dumps(metadata) # .encode('utf-8')
def json_description_metadata(description):
"""Return metatata from JSON formated image description as dict.
Raise ValuError if description is of unknown format.
>>> description = '{"shape": [256, 256, 3], "axes": "YXS"}'
>>> json_description_metadata(description) # doctest: +SKIP
{'shape': [256, 256, 3], 'axes': 'YXS'}
>>> json_description_metadata('shape=(256, 256, 3)')
{'shape': (256, 256, 3)}
"""
if description[:6] == 'shape=':
# old style 'shaped' description; not JSON
shape = tuple(int(i) for i in description[7:-1].split(','))
return dict(shape=shape)
if description[:1] == '{' and description[-1:] == '}':
# JSON description
return json.loads(description)
raise ValueError('invalid JSON image description', description)
def fluoview_description_metadata(description, ignoresections=None):
"""Return metatata from FluoView image description as dict.
The FluoView image description format is unspecified. Expect failures.
>>> descr = ('[Intensity Mapping]\\nMap Ch0: Range=00000 to 02047\\n'
... '[Intensity Mapping End]')
>>> fluoview_description_metadata(descr)
{'Intensity Mapping': {'Map Ch0: Range': '00000 to 02047'}}
"""
if not description.startswith('['):
raise ValueError('invalid FluoView image description')
if ignoresections is None:
ignoresections = {'Region Info (Fields)', 'Protocol Description'}
result = {}
sections = [result]
comment = False
for line in description.splitlines():
if not comment:
line = line.strip()
if not line:
continue
if line[0] == '[':
if line[-5:] == ' End]':
# close section
del sections[-1]
section = sections[-1]
name = line[1:-5]
if comment:
section[name] = '\n'.join(section[name])
if name[:4] == 'LUT ':
a = numpy.array(section[name], dtype='uint8')
a.shape = -1, 3
section[name] = a
continue
# new section
comment = False
name = line[1:-1]
if name[:4] == 'LUT ':
section = []
elif name in ignoresections:
section = []
comment = True
else:
section = {}
sections.append(section)
result[name] = section
continue
# add entry
if comment:
section.append(line)
continue
line = line.split('=', 1)
if len(line) == 1:
section[line[0].strip()] = None
continue
key, value = line
if key[:4] == 'RGB ':
section.extend(int(rgb) for rgb in value.split())
else:
section[key.strip()] = astype(value.strip())
return result
def pilatus_description_metadata(description):
"""Return metatata from Pilatus image description as dict.
Return metadata from Pilatus pixel array detectors by Dectris, created
by camserver or TVX software.
>>> pilatus_description_metadata('# Pixel_size 172e-6 m x 172e-6 m')
{'Pixel_size': (0.000172, 0.000172)}
"""
result = {}
if not description.startswith('# '):
return result
for c in '#:=,()':
description = description.replace(c, ' ')
for line in description.split('\n'):
if line[:2] != ' ':
continue
line = line.split()
name = line[0]
if line[0] not in TIFF.PILATUS_HEADER:
try:
result['DateTime'] = datetime.datetime.strptime(
' '.join(line), '%Y-%m-%dT%H %M %S.%f')
except Exception:
result[name] = ' '.join(line[1:])
continue
indices, dtype = TIFF.PILATUS_HEADER[line[0]]
if isinstance(indices[0], slice):
# assumes one slice
values = line[indices[0]]
else:
values = [line[i] for i in indices]
if dtype is float and values[0] == 'not':
values = ['NaN']
values = tuple(dtype(v) for v in values)
if dtype == str:
values = ' '.join(values)
elif len(values) == 1:
values = values[0]
result[name] = values
return result
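The index lists in TIFF.PILATUS_HEADER are applied as the parser above shows: the characters '#:=,()' become spaces, each line is split on whitespace, and the listed token positions are converted with the listed type. A standalone walk-through of the 'Pixel_size' doctest line:

```python
# Tokenize one camserver header line and apply the ([1, 4], float)
# rule listed for 'Pixel_size' in the PILATUS_HEADER table.
line = '# Pixel_size 172e-6 m x 172e-6 m'
for c in '#:=,()':
    line = line.replace(c, ' ')
tokens = line.split()            # ['Pixel_size', '172e-6', 'm', 'x', ...]
indices, dtype = [1, 4], float
values = tuple(dtype(tokens[i]) for i in indices)
```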
def svs_description_metadata(description):
"""Return metatata from Aperio image description as dict.
The Aperio image description format is unspecified. Expect failures.
>>> svs_description_metadata('Aperio Image Library v1.0')
{'Aperio Image Library': 'v1.0'}
"""
if not description.startswith('Aperio Image Library '):
raise ValueError('invalid Aperio image description')
result = {}
lines = description.split('\n')
key, value = lines[0].strip().rsplit(None, 1) # 'Aperio Image Library'
result[key.strip()] = value.strip()
if len(lines) == 1:
return result
items = lines[1].split('|')
result[''] = items[0].strip() # TODO: parse this?
for item in items[1:]:
key, value = item.split(' = ')
result[key.strip()] = astype(value.strip())
return result
def stk_description_metadata(description):
"""Return metadata from MetaMorph image description as list of dict.
The MetaMorph image description format is unspecified. Expect failures.
"""
description = description.strip()
if not description:
return []
try:
description = bytes2str(description)
except UnicodeDecodeError as e:
logging.warning('stk_description_metadata: %s', str(e))
return []
result = []
for plane in description.split('\x00'):
d = {}
for line in plane.split('\r\n'):
line = line.split(':', 1)
if len(line) > 1:
name, value = line
d[name.strip()] = astype(value.strip())
else:
value = line[0].strip()
if value:
if '' in d:
d[''].append(value)
else:
d[''] = [value]
result.append(d)
return result
def metaseries_description_metadata(description):
"""Return metatata from MetaSeries image description as dict."""
if not description.startswith(''):
raise ValueError('invalid MetaSeries image description')
from xml.etree import cElementTree as etree # delayed import
root = etree.fromstring(description)
types = {'float': float, 'int': int,
'bool': lambda x: asbool(x, 'on', 'off')}
def parse(root, result):
# recursive
for child in root:
attrib = child.attrib
if not attrib:
result[child.tag] = parse(child, {})
continue
if 'id' in attrib:
i = attrib['id']
t = attrib['type']
v = attrib['value']
if t in types:
result[i] = types[t](v)
else:
result[i] = v
return result
adict = parse(root, {})
if 'Description' in adict:
adict['Description'] = adict['Description'].replace('&#13;&#10;', '\n')
return adict
def scanimage_description_metadata(description):
"""Return metadata from ScanImage image description as dict."""
return matlabstr2py(description)
def scanimage_artist_metadata(artist):
"""Return metadata from ScanImage artist tag as dict."""
try:
return json.loads(artist)
except ValueError as e:
logging.warning('scanimage_artist_metadata: %s', str(e))
def olympusini_metadata(inistr):
"""Return OlympusSIS metadata from INI string.
No documentation is available.
"""
def keyindex(key):
# split key into name and index
index = 0
i = len(key.rstrip('0123456789'))
if i < len(key):
index = int(key[i:]) - 1
key = key[:i]
return key, index
result = {}
bands = []
zpos = None
tpos = None
for line in inistr.splitlines():
line = line.strip()
if line == '' or line[0] == ';':
continue
if line[0] == '[' and line[-1] == ']':
section_name = line[1:-1]
result[section_name] = section = {}
if section_name == 'Dimension':
result['axes'] = axes = []
result['shape'] = shape = []
elif section_name == 'ASD':
result[section_name] = []
elif section_name == 'Z':
if 'Dimension' in result:
result[section_name]['ZPos'] = zpos = []
elif section_name == 'Time':
if 'Dimension' in result:
result[section_name]['TimePos'] = tpos = []
elif section_name == 'Band':
nbands = result['Dimension']['Band']
bands = [{'LUT': []} for i in range(nbands)]
result[section_name] = bands
iband = 0
else:
key, value = line.split('=')
if value.strip() == '':
value = None
elif ',' in value:
value = tuple(astype(v) for v in value.split(','))
else:
value = astype(value)
if section_name == 'Dimension':
section[key] = value
axes.append(key)
shape.append(value)
elif section_name == 'ASD':
if key == 'Count':
result['ASD'] = [{}] * value
else:
key, index = keyindex(key)
result['ASD'][index][key] = value
elif section_name == 'Band':
if key[:3] == 'LUT':
lut = bands[iband]['LUT']
value = struct.pack('<I', value)
lut.append([ord(value[0:1]), ord(value[1:2]), ord(value[2:3])])
else:
key, iband = keyindex(key)
bands[iband][key] = value
elif key[:4] == 'ZPos' and zpos is not None:
zpos.append(value)
elif key[:7] == 'TimePos' and tpos is not None:
tpos.append(value)
else:
section[key] = value
if 'axes' in result:
sisaxes = {'Band': 'C'}
axes = []
shape = []
for i, x in zip(result['shape'], result['axes']):
if i > 1:
axes.append(sisaxes.get(x, x[0].upper()))
shape.append(i)
result['axes'] = ''.join(axes)
result['shape'] = tuple(shape)
try:
result['Z']['ZPos'] = numpy.array(
result['Z']['ZPos'][:result['Dimension']['Z']], 'float64')
except Exception:
pass
try:
result['Time']['TimePos'] = numpy.array(
result['Time']['TimePos'][:result['Dimension']['Time']], 'int32')
except Exception:
pass
for band in bands:
band['LUT'] = numpy.array(band['LUT'], 'uint8')
return result
def unpack_rgb(data, dtype='<B', bitspersample=(5, 6, 5), rescale=True):
"""Return array from byte string containing packed samples.
Use to unpack RGB565 or RGB555 to RGB888 format.
Parameters
----------
data : byte str
The data to be decoded. Samples in each pixel are stored consecutively.
Pixels are aligned to 8, 16, or 32 bit boundaries.
dtype : numpy.dtype
The sample data type. The byteorder applies also to the data stream.
bitspersample : tuple
Number of bits for each sample in a pixel.
rescale : bool
Upscale samples to the number of bits in dtype.
Returns
-------
numpy.ndarray
Flattened array of unpacked samples of native dtype.
Examples
--------
>>> data = struct.pack('BBBB', 0x21, 0x08, 0xff, 0xff)
>>> print(unpack_rgb(data, '<B', (5, 6, 5), False))
[ 1  1  1 31 63 31]
>>> print(unpack_rgb(data, '<B', (5, 6, 5)))
[  8   4   8 255 255 255]
>>> print(unpack_rgb(data, '<B', (5, 5, 5)))
[ 16   8   8 255 255 255]
"""
dtype = numpy.dtype(dtype)
bits = int(numpy.sum(bitspersample))
if not (bits <= 32 and all(i <= dtype.itemsize*8 for i in bitspersample)):
raise ValueError('sample size not supported: %s' % str(bitspersample))
dt = next(i for i in 'BHI' if numpy.dtype(i).itemsize*8 >= bits)
data = numpy.frombuffer(data, dtype.byteorder+dt)
result = numpy.empty((data.size, len(bitspersample)), dtype.char)
for i, bps in enumerate(bitspersample):
t = data >> int(numpy.sum(bitspersample[i+1:]))
t &= int('0b'+'1'*bps, 2)
if rescale:
o = ((dtype.itemsize * 8) // bps + 1) * bps
if o > data.dtype.itemsize * 8:
t = t.astype('I')
t *= (2**o - 1) // (2**bps - 1)
t //= 2**(o - (dtype.itemsize * 8))
result[:, i] = t
return result.reshape(-1)
def delta_encode(data, axis=-1, out=None):
"""Encode Delta."""
if isinstance(data, (bytes, bytearray)):
data = numpy.frombuffer(data, dtype='u1')
diff = numpy.diff(data, axis=0)
return numpy.insert(diff, 0, data[0]).tobytes()
dtype = data.dtype
if dtype.kind == 'f':
data = data.view('u%i' % dtype.itemsize)
diff = numpy.diff(data, axis=axis)
key = [slice(None)] * data.ndim
key[axis] = 0
diff = numpy.insert(diff, 0, data[tuple(key)], axis=axis)
if dtype.kind == 'f':
return diff.view(dtype)
return diff
def delta_decode(data, axis=-1, out=None):
"""Decode Delta."""
if out is not None and not out.flags.writeable:
out = None
if isinstance(data, (bytes, bytearray)):
data = numpy.frombuffer(data, dtype='u1')
return numpy.cumsum(data, axis=0, dtype='u1', out=out).tobytes()
if data.dtype.kind == 'f':
view = data.view('u%i' % data.dtype.itemsize)
view = numpy.cumsum(view, axis=axis, dtype=view.dtype)
return view.view(data.dtype)
return numpy.cumsum(data, axis=axis, dtype=data.dtype, out=out)
def bitorder_decode(data, out=None, _bitorder=[]):
"""Reverse bits in each byte of byte string or numpy array.
Decode data where pixels with lower column values are stored in the
lower-order bits of the bytes (TIFF FillOrder is LSB2MSB).
Parameters
----------
data : byte string or ndarray
The data to be bit reversed. If byte string, a new bit-reversed byte
string is returned. Numpy arrays are bit-reversed in-place.
Examples
--------
>>> bitorder_decode(b'\\x01\\x64')
b'\\x80&'
>>> data = numpy.array([1, 666], dtype='uint16')
>>> bitorder_decode(data)
>>> data
array([ 128, 16473], dtype=uint16)
"""
if not _bitorder:
_bitorder.append(
b'\x00\x80@\xc0 \xa0`\xe0\x10\x90P\xd00\xb0p\xf0\x08\x88H\xc8('
b'\xa8h\xe8\x18\x98X\xd88\xb8x\xf8\x04\x84D\xc4$\xa4d\xe4\x14'
b'\x94T\xd44\xb4t\xf4\x0c\x8cL\xcc,\xacl\xec\x1c\x9c\\\xdc<\xbc|'
b'\xfc\x02\x82B\xc2"\xa2b\xe2\x12\x92R\xd22\xb2r\xf2\n\x8aJ\xca*'
b'\xaaj\xea\x1a\x9aZ\xda:\xbaz\xfa\x06\x86F\xc6&\xa6f\xe6\x16'
b'\x96V\xd66\xb6v\xf6\x0e\x8eN\xce.\xaen\xee\x1e\x9e^\xde>\xbe~'
b'\xfe\x01\x81A\xc1!\xa1a\xe1\x11\x91Q\xd11\xb1q\xf1\t\x89I\xc9)'
b'\xa9i\xe9\x19\x99Y\xd99\xb9y\xf9\x05\x85E\xc5%\xa5e\xe5\x15'
b'\x95U\xd55\xb5u\xf5\r\x8dM\xcd-\xadm\xed\x1d\x9d]\xdd=\xbd}'
b'\xfd\x03\x83C\xc3#\xa3c\xe3\x13\x93S\xd33\xb3s\xf3\x0b\x8bK'
b'\xcb+\xabk\xeb\x1b\x9b[\xdb;\xbb{\xfb\x07\x87G\xc7\'\xa7g\xe7'
b'\x17\x97W\xd77\xb7w\xf7\x0f\x8fO\xcf/\xafo\xef\x1f\x9f_'
b'\xdf?\xbf\x7f\xff')
_bitorder.append(numpy.frombuffer(_bitorder[0], dtype='uint8'))
try:
view = data.view('uint8')
numpy.take(_bitorder[1], view, out=view)
return data
except AttributeError:
return data.translate(_bitorder[0])
except ValueError:
raise NotImplementedError('slices of arrays not supported')
return None
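The hardcoded lookup table above can be regenerated programmatically; a standalone sketch (not part of tifffile) of building it and applying it with numpy.take, as bitorder_decode does for arrays:

```python
# Standalone sketch: rebuild the byte bit-reversal table and apply it
# in place with numpy.take, mirroring the array branch above.
import numpy

table = numpy.array(
    [int('{:08b}'.format(i)[::-1], 2) for i in range(256)], dtype='uint8')
data = numpy.array([1, 666], dtype='uint16')
view = data.view('uint8')
numpy.take(table, view, out=view)  # reverse bits of every byte in place
# on a little-endian system this yields [128, 16473], as in the doctest
```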
def packints_decode(data, dtype, numbits, runlen=0, out=None):
"""Decompress byte string to array of integers.
This implementation only handles itemsizes 1, 8, 16, 32, and 64 bits.
Install the imagecodecs package for decoding other integer sizes.
Parameters
----------
data : byte str
Data to decompress.
dtype : numpy.dtype or str
A numpy boolean or integer type.
numbits : int
Number of bits per integer.
runlen : int
Number of consecutive integers, after which to start at next byte.
Examples
--------
>>> packints_decode(b'a', 'B', 1)
array([0, 1, 1, 0, 0, 0, 0, 1], dtype=uint8)
"""
if numbits == 1: # bitarray
data = numpy.frombuffer(data, '|B')
data = numpy.unpackbits(data)
if runlen % 8:
data = data.reshape(-1, runlen + (8 - runlen % 8))
data = data[:, :runlen].reshape(-1)
return data.astype(dtype)
if numbits in (8, 16, 32, 64):
return numpy.frombuffer(data, dtype)
raise NotImplementedError('unpacking %s-bit integers to %s not supported'
% (numbits, numpy.dtype(dtype)))
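A standalone sketch of the 1-bit branch above: numpy.unpackbits expands each byte to eight bits, and the runlen handling trims the per-row padding.

```python
# Standalone sketch of the bilevel path: unpack bits, then trim the
# byte-boundary padding that 'runlen' describes.
import numpy

packed = numpy.frombuffer(bytes([0b10110000, 0b01011000]), dtype='|B')
bits = numpy.unpackbits(packed)
runlen = 5  # five valid bits per row, each row padded to a byte
rows = bits.reshape(-1, runlen + (8 - runlen % 8))[:, :runlen]
assert rows.tolist() == [[1, 0, 1, 1, 0], [0, 1, 0, 1, 1]]
```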
if imagecodecs is not None:
bitorder_decode = imagecodecs.bitorder_decode # noqa
packints_decode = imagecodecs.packints_decode # noqa
def apply_colormap(image, colormap, contig=True):
"""Return palette-colored image.
The image values are used to index the colormap on axis 1. The returned
image is of shape image.shape+colormap.shape[0] and dtype colormap.dtype.
Parameters
----------
image : numpy.ndarray
Indexes into the colormap.
colormap : numpy.ndarray
RGB lookup table aka palette of shape (3, 2**bits_per_sample).
contig : bool
If True, return a contiguous array.
Examples
--------
>>> image = numpy.arange(256, dtype='uint8')
>>> colormap = numpy.vstack([image, image, image]).astype('uint16') * 256
>>> apply_colormap(image, colormap)[-1]
array([65280, 65280, 65280], dtype=uint16)
"""
image = numpy.take(colormap, image, axis=1)
image = numpy.rollaxis(image, 0, image.ndim)
if contig:
image = numpy.ascontiguousarray(image)
return image
def reorient(image, orientation):
"""Return reoriented view of image array.
Parameters
----------
image : numpy.ndarray
Non-squeezed output of asarray() functions.
Axes -3 and -2 must be image length and width respectively.
orientation : int or str
One of TIFF.ORIENTATION names or values.
"""
orient = TIFF.ORIENTATION
orientation = enumarg(orient, orientation)
if orientation == orient.TOPLEFT:
return image
if orientation == orient.TOPRIGHT:
return image[..., ::-1, :]
if orientation == orient.BOTLEFT:
return image[..., ::-1, :, :]
if orientation == orient.BOTRIGHT:
return image[..., ::-1, ::-1, :]
if orientation == orient.LEFTTOP:
return numpy.swapaxes(image, -3, -2)
if orientation == orient.RIGHTTOP:
return numpy.swapaxes(image, -3, -2)[..., ::-1, :]
if orientation == orient.RIGHTBOT:
return numpy.swapaxes(image, -3, -2)[..., ::-1, :, :]
if orientation == orient.LEFTBOT:
return numpy.swapaxes(image, -3, -2)[..., ::-1, ::-1, :]
return image
def repeat_nd(a, repeats):
"""Return read-only view into input array with elements repeated.
Zoom nD image by integer factors using nearest neighbor interpolation
(box filter).
Parameters
----------
a : array_like
Input array.
repeats : sequence of int
The number of repetitions to apply along each dimension of input array.
Examples
--------
>>> repeat_nd([[1, 2], [3, 4]], (2, 2))
array([[1, 1, 2, 2],
[1, 1, 2, 2],
[3, 3, 4, 4],
[3, 3, 4, 4]])
"""
a = numpy.asarray(a)
reshape = []
shape = []
strides = []
for i, j, k in zip(a.strides, a.shape, repeats):
shape.extend((j, k))
strides.extend((i, 0))
reshape.append(j * k)
return numpy.lib.stride_tricks.as_strided(
a, shape, strides, writeable=False).reshape(reshape)
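A standalone cross-check (not part of tifffile): the zero-stride trick above is equivalent to nearest-neighbor zooming with numpy.repeat along each axis, just without copying until reshape.

```python
# Standalone check: zero-stride repetition matches numpy.repeat
# applied along each axis (nearest-neighbor box filter zoom).
import numpy

a = numpy.array([[1, 2], [3, 4]])
zoomed = numpy.repeat(numpy.repeat(a, 2, axis=0), 2, axis=1)
assert zoomed.tolist() == [[1, 1, 2, 2], [1, 1, 2, 2],
                           [3, 3, 4, 4], [3, 3, 4, 4]]
```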
def reshape_nd(data_or_shape, ndim):
"""Return image array or shape with at least ndim dimensions.
Prepend 1s to image shape as necessary.
>>> reshape_nd(numpy.empty(0), 1).shape
(0,)
>>> reshape_nd(numpy.empty(1), 2).shape
(1, 1)
>>> reshape_nd(numpy.empty((2, 3)), 3).shape
(1, 2, 3)
>>> reshape_nd(numpy.empty((3, 4, 5)), 3).shape
(3, 4, 5)
>>> reshape_nd((2, 3), 3)
(1, 2, 3)
"""
is_shape = isinstance(data_or_shape, tuple)
shape = data_or_shape if is_shape else data_or_shape.shape
if len(shape) >= ndim:
return data_or_shape
shape = (1,) * (ndim - len(shape)) + shape
return shape if is_shape else data_or_shape.reshape(shape)
def squeeze_axes(shape, axes, skip='XY'):
"""Return shape and axes with single-dimensional entries removed.
Remove unused dimensions unless their axes are listed in 'skip'.
>>> squeeze_axes((5, 1, 2, 1, 1), 'TZYXC')
((5, 2, 1), 'TYX')
"""
if len(shape) != len(axes):
raise ValueError('dimensions of axes and shape do not match')
shape, axes = zip(*(i for i in zip(shape, axes)
if i[0] > 1 or i[1] in skip))
return tuple(shape), ''.join(axes)
def transpose_axes(image, axes, asaxes='CTZYX'):
"""Return image with its axes permuted to match specified axes.
A view is returned if possible.
>>> transpose_axes(numpy.zeros((2, 3, 4, 5)), 'TYXC', asaxes='CTZYX').shape
(5, 2, 1, 3, 4)
"""
for ax in axes:
if ax not in asaxes:
raise ValueError('unknown axis %s' % ax)
# add missing axes to image
shape = image.shape
for ax in reversed(asaxes):
if ax not in axes:
axes = ax + axes
shape = (1,) + shape
image = image.reshape(shape)
# transpose axes
image = image.transpose([axes.index(ax) for ax in asaxes])
return image
def reshape_axes(axes, shape, newshape, unknown='Q'):
"""Return axes matching new shape.
Unknown dimensions are labelled 'Q'.
>>> reshape_axes('YXS', (219, 301, 1), (219, 301))
'YX'
>>> reshape_axes('IYX', (12, 219, 301), (3, 4, 219, 1, 301, 1))
'QQYQXQ'
"""
shape = tuple(shape)
newshape = tuple(newshape)
if len(axes) != len(shape):
raise ValueError('axes do not match shape')
size = product(shape)
newsize = product(newshape)
if size != newsize:
raise ValueError('cannot reshape %s to %s' % (shape, newshape))
if not axes or not newshape:
return ''
lendiff = max(0, len(shape) - len(newshape))
if lendiff:
newshape = newshape + (1,) * lendiff
i = len(shape)-1
prodns = 1
prods = 1
result = []
for ns in newshape[::-1]:
prodns *= ns
while i > 0 and shape[i] == 1 and ns != 1:
i -= 1
if ns == shape[i] and prodns == prods*shape[i]:
prods *= shape[i]
result.append(axes[i])
i -= 1
else:
result.append(unknown)
return ''.join(reversed(result[lendiff:]))
def stack_pages(pages, out=None, maxworkers=None, **kwargs):
"""Read data from sequence of TiffPage and stack them vertically.
Additional parameters are passed to the TiffPage.asarray function.
"""
npages = len(pages)
if npages == 0:
raise ValueError('no pages')
if npages == 1:
kwargs['maxworkers'] = maxworkers
return pages[0].asarray(out=out, **kwargs)
page0 = next(p for p in pages if p is not None).keyframe
page0.asarray(validate=None) # ThreadPoolExecutor swallows exceptions
shape = (npages,) + page0.shape
dtype = page0.dtype
out = create_output(out, shape, dtype)
if maxworkers is None:
if page0.compression > 1:
if page0.is_tiled:
maxworkers = 1
kwargs['maxworkers'] = 0
else:
maxworkers = 0
else:
maxworkers = 1
if maxworkers == 0:
import multiprocessing # noqa: delay import
maxworkers = multiprocessing.cpu_count() // 2
if maxworkers > 1:
kwargs['maxworkers'] = 1
page0.parent.filehandle.lock = maxworkers > 1
filecache = OpenFileCache(size=max(4, maxworkers),
lock=page0.parent.filehandle.lock)
def func(page, index, out=out, filecache=filecache, kwargs=kwargs):
"""Read, decode, and copy page data."""
if page is not None:
filecache.open(page.parent.filehandle)
out[index] = page.asarray(lock=filecache.lock, reopen=False,
validate=False, **kwargs)
filecache.close(page.parent.filehandle)
if maxworkers < 2:
for i, page in enumerate(pages):
func(page, i)
else:
# TODO: add exception handling
with concurrent.futures.ThreadPoolExecutor(maxworkers) as executor:
executor.map(func, pages, range(npages))
filecache.clear()
page0.parent.filehandle.lock = None
return out
def clean_offsets_counts(offsets, counts):
"""Return cleaned offsets and byte counts.
Remove zero offsets and counts. Use to sanitize _offsets and _bytecounts
tag values for strips or tiles.
"""
offsets = list(offsets)
counts = list(counts)
if len(offsets) != len(counts):
raise ValueError('StripOffsets and StripByteCounts mismatch')
j = 0
for i, (o, b) in enumerate(zip(offsets, counts)):
if o > 0 and b > 0:
if i > j:
offsets[j] = o
counts[j] = b
j += 1
elif b > 0 and o <= 0:
raise ValueError('invalid offset')
else:
logging.warning('clean_offsets_counts: empty byte count')
if j == 0:
j = 1
return offsets[:j], counts[:j]
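A standalone sketch of the compaction above: strip entries whose offset and byte count are both zero are dropped, and the order of the remaining entries is preserved.

```python
# Standalone sketch: keep only strip entries with positive offset and
# positive byte count, preserving order.
offsets = [8, 0, 512, 0]
counts = [100, 0, 100, 0]
kept = [(o, b) for o, b in zip(offsets, counts) if o > 0 and b > 0]
assert kept == [(8, 100), (512, 100)]
```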
def buffered_read(fh, lock, offsets, bytecounts, buffersize=2**26):
"""Return iterator over segments read from file."""
length = len(offsets)
i = 0
while i < length:
data = []
with lock:
size = 0
while size < buffersize and i < length:
fh.seek(offsets[i])
bytecount = bytecounts[i]
data.append(fh.read(bytecount))
# buffer = bytearray(bytecount)
# n = fh.readinto(buffer)
# data.append(buffer[:n])
size += bytecount
i += 1
for segment in data:
yield segment
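A standalone sketch of the buffering strategy above: batch reads while holding the lock (up to roughly buffersize bytes), then yield the segments after releasing it. The names here are illustrative, not part of tifffile.

```python
# Standalone sketch: group seek/read calls under the lock, yield the
# collected segments outside it.
import io
import threading

def buffered_read_sketch(fh, lock, offsets, bytecounts, buffersize=4):
    i = 0
    while i < len(offsets):
        data = []
        with lock:
            size = 0
            while size < buffersize and i < len(offsets):
                fh.seek(offsets[i])
                data.append(fh.read(bytecounts[i]))
                size += bytecounts[i]
                i += 1
        for segment in data:  # yield outside the lock
            yield segment

fh = io.BytesIO(b'0123456789')
segments = list(buffered_read_sketch(fh, threading.Lock(), [2, 7], [3, 2]))
assert segments == [b'234', b'78']
```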
def create_output(out, shape, dtype, mode='w+', suffix='.memmap'):
"""Return numpy array where image data of shape and dtype can be copied.
The 'out' parameter may have the following values or types:
None
An empty array of shape and dtype is created and returned.
numpy.ndarray
An existing writable array of compatible dtype and shape. A view of
the same array is returned after verification.
'memmap' or 'memmap:tempdir'
A memory-map to an array stored in a temporary binary file on disk
is created and returned.
str or open file
The file name or file object used to create a memory-map to an array
stored in a binary file on disk. The created memory-mapped array is
returned.
"""
if out is None:
return numpy.zeros(shape, dtype)
if isinstance(out, str) and out[:6] == 'memmap':
import tempfile # noqa: delay import
tempdir = out[7:] if len(out) > 7 else None
with tempfile.NamedTemporaryFile(dir=tempdir, suffix=suffix) as fh:
return numpy.memmap(fh, shape=shape, dtype=dtype, mode=mode)
if isinstance(out, numpy.ndarray):
if product(shape) != product(out.shape):
raise ValueError('incompatible output shape')
if not numpy.can_cast(dtype, out.dtype):
raise ValueError('incompatible output dtype')
return out.reshape(shape)
if isinstance(out, pathlib.Path):
out = str(out)
return numpy.memmap(out, shape=shape, dtype=dtype, mode=mode)
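A standalone sketch of the 'memmap' branch above: backing the output array with a temporary file so it need not fit in memory (assumes a POSIX-like platform where a NamedTemporaryFile can be memory-mapped while open).

```python
# Standalone sketch: create a file-backed output array in a temporary
# file, as the 'memmap' option above does.
import tempfile
import numpy

with tempfile.NamedTemporaryFile(suffix='.memmap') as fh:
    out = numpy.memmap(fh, shape=(4, 3), dtype='uint16', mode='w+')
    out[:] = 7
    total = int(out.sum())
assert total == 84
```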
def matlabstr2py(string):
"""Return Python object from Matlab string representation.
Return str, bool, int, float, list (Matlab arrays or cells), or
dict (Matlab structures) types.
Use to access ScanImage metadata.
>>> matlabstr2py('1')
1
>>> matlabstr2py("['x y z' true false; 1 2.0 -3e4; NaN Inf @class]")
[['x y z', True, False], [1, 2.0, -30000.0], [nan, inf, '@class']]
>>> d = matlabstr2py("SI.hChannels.channelType = {'stripe' 'stripe'}\\n"
... "SI.hChannels.channelsActive = 2")
>>> d['SI.hChannels.channelType']
['stripe', 'stripe']
"""
# TODO: handle invalid input
# TODO: review unboxing of multidimensional arrays
def lex(s):
# return sequence of tokens from matlab string representation
tokens = ['[']
while True:
t, i = next_token(s)
if t is None:
break
if t == ';':
tokens.extend((']', '['))
elif t == '[':
tokens.extend(('[', '['))
elif t == ']':
tokens.extend((']', ']'))
else:
tokens.append(t)
s = s[i:]
tokens.append(']')
return tokens
def next_token(s):
# return next token in matlab string
length = len(s)
if length == 0:
return None, 0
i = 0
while i < length and s[i] == ' ':
i += 1
if i == length:
return None, i
if s[i] in '{[;]}':
return s[i], i + 1
if s[i] == "'":
j = i + 1
while j < length and s[j] != "'":
j += 1
return s[i: j+1], j + 1
if s[i] == '<':
j = i + 1
while j < length and s[j] != '>':
j += 1
return s[i: j+1], j + 1
j = i
while j < length and not s[j] in ' {[;]}':
j += 1
return s[i:j], j
def value(s, fail=False):
# return Python value of token
s = s.strip()
if not s:
return s
if len(s) == 1:
try:
return int(s)
except Exception:
if fail:
raise ValueError()
return s
if s[0] == "'":
if fail and s[-1] != "'" or "'" in s[1:-1]:
raise ValueError()
return s[1:-1]
if s[0] == '<':
if fail and s[-1] != '>' or '<' in s[1:-1]:
raise ValueError()
return s
if fail and any(i in s for i in " ';[]{}"):
raise ValueError()
if s[0] == '@':
return s
if s in ('true', 'True'):
return True
if s in ('false', 'False'):
return False
if s[:6] == 'zeros(':
return numpy.zeros([int(i) for i in s[6:-1].split(',')]).tolist()
if s[:5] == 'ones(':
return numpy.ones([int(i) for i in s[5:-1].split(',')]).tolist()
if '.' in s or 'e' in s:
try:
return float(s)
except Exception:
pass
try:
return int(s)
except Exception:
pass
try:
return float(s) # nan, inf
except Exception:
if fail:
raise ValueError()
return s
def parse(s):
# return Python value from string representation of Matlab value
s = s.strip()
try:
return value(s, fail=True)
except ValueError:
pass
result = add2 = []
levels = [add2]
for t in lex(s):
if t in '[{':
add2 = []
levels.append(add2)
elif t in ']}':
x = levels.pop()
if len(x) == 1 and isinstance(x[0], (list, str)):
x = x[0]
add2 = levels[-1]
add2.append(x)
else:
add2.append(value(t))
if len(result) == 1 and isinstance(result[0], (list, str)):
result = result[0]
return result
if '\r' in string or '\n' in string:
# structure
d = {}
for line in string.splitlines():
line = line.strip()
if not line or line[0] == '%':
continue
k, v = line.split('=', 1)
k = k.strip()
if any(c in k for c in " ';[]{}<>"):
continue
d[k] = parse(v)
return d
return parse(string)
def stripnull(string, null=b'\x00'):
"""Return string truncated at first null character.
Clean NULL terminated C strings. For unicode strings use null='\\0'.
>>> stripnull(b'string\\x00')
b'string'
>>> stripnull('string\\x00', null='\\0')
'string'
"""
i = string.find(null)
return string if (i < 0) else string[:i]
def stripascii(string):
"""Return string truncated at last byte that is 7-bit ASCII.
Clean NULL separated and terminated TIFF strings.
>>> stripascii(b'string\\x00string\\n\\x01\\x00')
b'string\\x00string\\n'
>>> stripascii(b'\\x00')
b''
"""
# TODO: pythonize this
i = len(string)
while i:
i -= 1
if 8 < byte2int(string[i]) < 127:
break
else:
i = -1
return string[:i+1]
def asbool(value, true=(b'true', u'true'), false=(b'false', u'false')):
"""Return string as bool if possible, else raise TypeError.
>>> asbool(b' False ')
False
"""
value = value.strip().lower()
if value in true: # might raise UnicodeWarning/BytesWarning
return True
if value in false:
return False
raise TypeError()
def astype(value, types=None):
"""Return argument as one of types if possible.
>>> astype('42')
42
>>> astype('3.14')
3.14
>>> astype('True')
True
>>> astype(b'Neee-Wom')
'Neee-Wom'
"""
if types is None:
types = int, float, asbool, bytes2str
for typ in types:
try:
return typ(value)
except (ValueError, AttributeError, TypeError, UnicodeEncodeError):
pass
return value
def format_size(size, threshold=1536):
"""Return file size as string from byte size.
>>> format_size(1234)
'1234 B'
>>> format_size(12345678901)
'11.50 GiB'
"""
if size < threshold:
return "%i B" % size
for unit in ('KiB', 'MiB', 'GiB', 'TiB', 'PiB'):
size /= 1024.0
if size < threshold:
return "%.2f %s" % (size, unit)
return 'ginormous'
def identityfunc(arg, *args, **kwargs):
"""Single argument identity function.
>>> identityfunc('arg')
'arg'
"""
return arg
def nullfunc(*args, **kwargs):
"""Null function.
>>> nullfunc('arg', kwarg='kwarg')
"""
return
def sequence(value):
"""Return tuple containing value if value is not a tuple or list.
>>> sequence(1)
(1,)
>>> sequence([1])
[1]
>>> sequence('ab')
('ab',)
"""
return value if isinstance(value, (tuple, list)) else (value,)
def product(iterable):
"""Return product of sequence of numbers.
Equivalent of functools.reduce(operator.mul, iterable, 1).
Multiplying numpy integers might overflow.
>>> product([2**8, 2**30])
274877906944
>>> product([])
1
"""
prod = 1
for i in iterable:
prod *= i
return prod
def natural_sorted(iterable):
"""Return human sorted list of strings.
E.g. for sorting file names.
>>> natural_sorted(['f1', 'f2', 'f10'])
['f1', 'f2', 'f10']
"""
def sortkey(x):
return [(int(c) if c.isdigit() else c) for c in re.split(numbers, x)]
numbers = re.compile(r'(\d+)')
return sorted(iterable, key=sortkey)
def excel_datetime(timestamp, epoch=datetime.datetime.fromordinal(693594)):
"""Return datetime object from timestamp in Excel serial format.
Convert LSM time stamps.
>>> excel_datetime(40237.029999999795)
datetime.datetime(2010, 2, 28, 0, 43, 11, 999982)
"""
return epoch + datetime.timedelta(timestamp)
def julian_datetime(julianday, milisecond=0):
"""Return datetime from days since 1/1/4713 BC and ms since midnight.
Convert Julian dates according to MetaMorph.
>>> julian_datetime(2451576, 54362783)
datetime.datetime(2000, 2, 2, 15, 6, 2, 783)
"""
if julianday <= 1721423:
# no datetime before year 1
return None
a = julianday + 1
if a > 2299160:
alpha = math.trunc((a - 1867216.25) / 36524.25)
a += 1 + alpha - alpha // 4
b = a + (1524 if a > 1721423 else 1158)
c = math.trunc((b - 122.1) / 365.25)
d = math.trunc(365.25 * c)
e = math.trunc((b - d) / 30.6001)
day = b - d - math.trunc(30.6001 * e)
month = e - (1 if e < 13.5 else 13)
year = c - (4716 if month > 2.5 else 4715)
hour, milisecond = divmod(milisecond, 1000 * 60 * 60)
minute, milisecond = divmod(milisecond, 1000 * 60)
second, milisecond = divmod(milisecond, 1000)
return datetime.datetime(year, month, day,
hour, minute, second, milisecond)
def byteorder_isnative(byteorder):
"""Return if byteorder matches the system's byteorder.
>>> byteorder_isnative('=')
True
"""
if byteorder in ('=', sys.byteorder):
return True
keys = {'big': '>', 'little': '<'}
return keys.get(byteorder, byteorder) == keys[sys.byteorder]
def recarray2dict(recarray):
"""Return numpy.recarray as dict."""
# TODO: subarrays
result = {}
for descr, value in zip(recarray.dtype.descr, recarray):
name, dtype = descr[:2]
if dtype[1] == 'S':
value = bytes2str(stripnull(value))
elif value.ndim < 2:
value = value.tolist()
result[name] = value
return result
def xml2dict(xml, sanitize=True, prefix=None):
"""Return XML as dict.
>>> xml2dict('<?xml version="1.0" ?><root attr="name"><key>1</key></root>')
{'root': {'key': 1, 'attr': 'name'}}
"""
from xml.etree import cElementTree as etree # delayed import
at = tx = ''
if prefix:
at, tx = prefix
def astype(value):
# return value as int, float, bool, or str
for t in (int, float, asbool):
try:
return t(value)
except Exception:
pass
return value
def etree2dict(t):
# adapted from https://stackoverflow.com/a/10077069/453463
key = t.tag
if sanitize:
key = key.rsplit('}', 1)[-1]
d = {key: {} if t.attrib else None}
children = list(t)
if children:
dd = collections.defaultdict(list)
for dc in map(etree2dict, children):
for k, v in dc.items():
dd[k].append(astype(v))
d = {key: {k: astype(v[0]) if len(v) == 1 else astype(v)
for k, v in dd.items()}}
if t.attrib:
d[key].update((at + k, astype(v)) for k, v in t.attrib.items())
if t.text:
text = t.text.strip()
if children or t.attrib:
if text:
d[key][tx + 'value'] = astype(text)
else:
d[key] = astype(text)
return d
return etree2dict(etree.fromstring(xml))
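A standalone sketch of the recursion above on a minimal document, using the stdlib ElementTree directly (values stay strings here; xml2dict additionally coerces them with astype and merges attributes the same way):

```python
# Standalone sketch: one level of the element-to-dict conversion,
# text children plus attributes merged into one mapping.
from xml.etree import ElementTree as etree

root = etree.fromstring('<root attr="name"><key>1</key></root>')
d = {root.tag: {child.tag: child.text for child in root}}
d[root.tag].update(root.attrib)
assert d == {'root': {'key': '1', 'attr': 'name'}}
```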
def hexdump(bytestr, width=75, height=24, snipat=-2, modulo=2, ellipsis='...'):
"""Return hexdump representation of byte string.
>>> hexdump(binascii.unhexlify('49492a00080000000e00fe0004000100'))
'49 49 2a 00 08 00 00 00 0e 00 fe 00 04 00 01 00 II*.............'
"""
size = len(bytestr)
if size < 1 or width < 2 or height < 1:
return ''
if height == 1:
addr = b''
bytesperline = min(modulo * (((width - len(addr)) // 4) // modulo),
size)
if bytesperline < 1:
return ''
nlines = 1
else:
addr = b'%%0%ix: ' % len(b'%x' % size)
bytesperline = min(modulo * (((width - len(addr % 1)) // 4) // modulo),
size)
if bytesperline < 1:
return ''
width = 3*bytesperline + len(addr % 1)
nlines = (size - 1) // bytesperline + 1
if snipat is None or snipat == 1:
snipat = height
elif 0 < abs(snipat) < 1:
snipat = int(math.floor(height * snipat))
if snipat < 0:
snipat += height
if height == 1 or nlines == 1:
blocks = [(0, bytestr[:bytesperline])]
addr = b''
height = 1
width = 3 * bytesperline
elif height is None or nlines <= height:
blocks = [(0, bytestr)]
elif snipat <= 0:
start = bytesperline * (nlines - height)
blocks = [(start, bytestr[start:])] # (start, None)
elif snipat >= height or height < 3:
end = bytesperline * height
blocks = [(0, bytestr[:end])] # (end, None)
else:
end1 = bytesperline * snipat
end2 = bytesperline * (height - snipat - 1)
blocks = [(0, bytestr[:end1]),
(size-end1-end2, None),
(size-end2, bytestr[size-end2:])]
ellipsis = str2bytes(ellipsis)
result = []
for start, bytestr in blocks:
if bytestr is None:
result.append(ellipsis) # 'skip %i bytes' % start)
continue
hexstr = binascii.hexlify(bytestr)
strstr = re.sub(br'[^\x20-\x7f]', b'.', bytestr)
for i in range(0, len(bytestr), bytesperline):
h = hexstr[2*i:2*i+bytesperline*2]
r = (addr % (i + start)) if height > 1 else addr
r += b' '.join(h[i:i+2] for i in range(0, 2*bytesperline, 2))
r += b' ' * (width - len(r))
r += strstr[i:i+bytesperline]
result.append(r)
result = b'\n'.join(result)
if sys.version_info[0] == 3:
result = result.decode('ascii')
return result
def isprintable(string):
"""Return if all characters in string are printable.
>>> isprintable('abc')
True
>>> isprintable(b'\01')
False
"""
string = string.strip()
if not string:
return True
if sys.version_info[0] == 3:
try:
return string.isprintable()
except Exception:
pass
try:
return string.decode('utf-8').isprintable()
except Exception:
pass
else:
if string.isalnum():
return True
printable = ('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRST'
'UVWXYZ!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ \t\n\r\x0b\x0c')
return all(c in printable for c in string)
def clean_whitespace(string, compact=False):
"""Return string with compressed whitespace."""
for a, b in (('\r\n', '\n'), ('\r', '\n'), ('\n\n', '\n'),
('\t', ' '), (' ', ' ')):
string = string.replace(a, b)
if compact:
for a, b in (('\n', ' '), ('[ ', '['),
(' ', ' '), (' ', ' '), (' ', ' ')):
string = string.replace(a, b)
return string.strip()
def pformat_xml(xml):
"""Return pretty formatted XML."""
try:
from lxml import etree # delayed import
if not isinstance(xml, bytes):
xml = xml.encode('utf-8')
xml = etree.parse(io.BytesIO(xml))
xml = etree.tostring(xml, pretty_print=True, xml_declaration=True,
encoding=xml.docinfo.encoding)
xml = bytes2str(xml)
except Exception:
if isinstance(xml, bytes):
xml = bytes2str(xml)
xml = xml.replace('><', '>\n<')
return xml.replace(' ', ' ').replace('\t', ' ')
def pformat(arg, width=79, height=24, compact=True):
"""Return pretty formatted representation of object as string.
Whitespace might be altered.
"""
if height is None or height < 1:
height = 1024
if width is None or width < 1:
width = 256
npopt = numpy.get_printoptions()
numpy.set_printoptions(threshold=100, linewidth=width)
if isinstance(arg, basestring):
if arg[:5].lower() in ('<?xml', b'<?xml'):
if height == 1:
arg = arg[:4*width]
else:
arg = pformat_xml(arg)
elif isinstance(arg, bytes):
if isprintable(arg):
arg = bytes2str(arg)
arg = clean_whitespace(arg)
else:
numpy.set_printoptions(**npopt)
return hexdump(arg, width=width, height=height, modulo=1)
arg = arg.rstrip()
elif isinstance(arg, numpy.record):
arg = arg.pprint()
else:
import pprint  # delayed import
compact = {} if sys.version_info[0] == 2 else dict(compact=compact)
arg = pprint.pformat(arg, width=width, **compact)
numpy.set_printoptions(**npopt)
if height == 1:
arg = clean_whitespace(arg, compact=True)
return arg[:width]
argl = list(arg.splitlines())
if len(argl) > height:
arg = '\n'.join(argl[:height//2] + ['...'] + argl[-height//2:])
return arg
def snipstr(string, width=79, snipat=0.5, ellipsis='...'):
"""Return string cut to specified length.
>>> snipstr('abcdefghijklmnop', 8)
'abc...op'
"""
if ellipsis is None:
if isinstance(string, bytes):
ellipsis = b'...'
else:
ellipsis = u'\u2026' # does not print on win-py3.5
esize = len(ellipsis)
splitlines = string.splitlines()
# TODO: finish and test multiline snip
result = []
for line in splitlines:
if line is None:
result.append(ellipsis)
continue
linelen = len(line)
if linelen <= width:
result.append(string)
continue
split = snipat
if split is None or split == 1:
split = linelen
elif 0 < abs(split) < 1:
split = int(math.floor(linelen * split))
if split < 0:
split += linelen
if split < 0:
split = 0
if esize == 0 or width < esize + 1:
if split <= 0:
result.append(string[-width:])
else:
result.append(string[:width])
elif split <= 0:
result.append(ellipsis + string[esize-width:])
elif split >= linelen or width < esize + 4:
result.append(string[:width-esize] + ellipsis)
else:
splitlen = linelen - width + esize
end1 = split - splitlen // 2
end2 = end1 + splitlen
result.append(string[:end1] + ellipsis + string[end2:])
if isinstance(string, bytes):
return b'\n'.join(result)
return '\n'.join(result)
def enumarg(enum, arg):
"""Return enum member from its name or value.
>>> enumarg(TIFF.PHOTOMETRIC, 2)
<PHOTOMETRIC.RGB: 2>
>>> enumarg(TIFF.PHOTOMETRIC, 'RGB')
<PHOTOMETRIC.RGB: 2>
"""
try:
return enum(arg)
except Exception:
try:
return enum[arg.upper()]
except Exception:
raise ValueError('invalid argument %s' % arg)
def parse_kwargs(kwargs, *keys, **keyvalues):
"""Return dict with keys from keys|keyvals and values from kwargs|keyvals.
Existing keys are deleted from kwargs.
>>> kwargs = {'one': 1, 'two': 2, 'four': 4}
>>> kwargs2 = parse_kwargs(kwargs, 'two', 'three', four=None, five=5)
>>> kwargs == {'one': 1}
True
>>> kwargs2 == {'two': 2, 'four': 4, 'five': 5}
True
"""
result = {}
for key in keys:
if key in kwargs:
result[key] = kwargs[key]
del kwargs[key]
for key, value in keyvalues.items():
if key in kwargs:
result[key] = kwargs[key]
del kwargs[key]
else:
result[key] = value
return result
def update_kwargs(kwargs, **keyvalues):
"""Update dict with keys and values if keys do not already exist.
>>> kwargs = {'one': 1, }
>>> update_kwargs(kwargs, one=None, two=2)
>>> kwargs == {'one': 1, 'two': 2}
True
"""
for key, value in keyvalues.items():
if key not in kwargs:
kwargs[key] = value
def validate_jhove(filename, jhove='jhove', ignore=('More than 50 IFDs',)):
"""Validate TIFF file using jhove -m TIFF-hul.
Raise ValueError if jhove outputs an error message unless the message
contains one of the strings in 'ignore'.
JHOVE does not support bigtiff or more than 50 IFDs.
See `JHOVE TIFF-hul Module <http://jhove.sourceforge.net/tiff-hul.html>`_
"""
import subprocess # noqa: delayed import
out = subprocess.check_output([jhove, filename, '-m', 'TIFF-hul'])
if b'ErrorMessage: ' in out:
for line in out.splitlines():
line = line.strip()
if line.startswith(b'ErrorMessage: '):
error = line[14:].decode('utf8')
for i in ignore:
if i in error:
break
else:
raise ValueError(error)
break
def lsm2bin(lsmfile, binfile=None, tile=(256, 256), verbose=True):
"""Convert [MP]TZCYX LSM file to series of BIN files.
One BIN file containing 'ZCYX' data is created for each position, time,
and tile. The position, time, and tile indices are encoded at the end
of the filenames.
"""
verbose = print_ if verbose else nullfunc
if binfile is None:
binfile = lsmfile
elif binfile.lower() == 'none':
binfile = None
if binfile:
binfile += '_(z%ic%iy%ix%i)_m%%ip%%it%%03iy%%ix%%i.bin'
verbose('\nOpening LSM file... ', end='', flush=True)
start_time = time.time()
with TiffFile(lsmfile) as lsm:
if not lsm.is_lsm:
verbose('\n', lsm, flush=True)
raise ValueError('not an LSM file')
series = lsm.series[0] # first series contains the image data
shape = series.shape
axes = series.axes
dtype = series.dtype
size = product(shape) * dtype.itemsize
verbose('%.3f s' % (time.time() - start_time))
# verbose(lsm, flush=True)
verbose('Image\n axes: %s\n shape: %s\n dtype: %s\n size: %s'
% (axes, shape, dtype, format_size(size)), flush=True)
if not series.axes.endswith('TZCYX'):
raise ValueError('not a *TZCYX LSM file')
verbose('Copying image from LSM to BIN files', end='', flush=True)
start_time = time.time()
tiles = shape[-2] // tile[-2], shape[-1] // tile[-1]
if binfile:
binfile = binfile % (shape[-4], shape[-3], tile[0], tile[1])
shape = (1,) * (7-len(shape)) + shape
# cache for ZCYX stacks and output files
data = numpy.empty(shape[3:], dtype=dtype)
out = numpy.empty((shape[-4], shape[-3], tile[0], tile[1]),
dtype=dtype)
# iterate over Tiff pages containing data
pages = iter(series.pages)
for m in range(shape[0]): # mosaic axis
for p in range(shape[1]): # position axis
for t in range(shape[2]): # time axis
for z in range(shape[3]): # z slices
data[z] = next(pages).asarray()
for y in range(tiles[0]): # tile y
for x in range(tiles[1]): # tile x
out[:] = data[...,
y*tile[0]:(y+1)*tile[0],
x*tile[1]:(x+1)*tile[1]]
if binfile:
out.tofile(binfile % (m, p, t, y, x))
verbose('.', end='', flush=True)
verbose(' %.3f s' % (time.time() - start_time))
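# The tiling arithmetic in lsm2bin (integer division of the image extent by
# the tile extent; pixels beyond the last full tile are dropped) can be
# checked in isolation with a synthetic array (shapes made up for
# illustration):

```python
import numpy

# A synthetic (Z, C, Y, X) stack cut into non-overlapping 256x256 tiles,
# as lsm2bin does; edge remainders beyond the last full tile are discarded.
data = numpy.zeros((2, 3, 512, 600), dtype='uint16')
tile = (256, 256)
tiles = data.shape[-2] // tile[0], data.shape[-1] // tile[1]
assert tiles == (2, 2)  # 600 // 256 == 2; the trailing 88 columns are dropped
y, x = 1, 1  # tile indices
out = data[..., y*tile[0]:(y+1)*tile[0], x*tile[1]:(x+1)*tile[1]]
assert out.shape == (2, 3, 256, 256)
```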
def imshow(data, photometric='RGB', planarconfig=None, bitspersample=None,
interpolation=None, cmap=None, vmin=0, vmax=None,
figure=None, title=None, dpi=96, subplot=111, maxdim=32768,
**kwargs):
"""Plot n-dimensional images using matplotlib.pyplot.
Return figure, subplot and plot axis.
Requires matplotlib.pyplot to be imported already
(``from matplotlib import pyplot``).
Parameters
----------
data : nd array
The image data.
photometric : {'MINISWHITE', 'MINISBLACK', 'RGB', or 'PALETTE'}
The color space of the image data.
planarconfig : {'CONTIG' or 'SEPARATE'}
Defines how components of each pixel are stored.
bitspersample : int
Number of bits per channel in integer RGB images.
interpolation : str
The image interpolation method used in matplotlib.imshow. By default,
'nearest' will be used for image dimensions <= 512, else 'bilinear'.
cmap : str or matplotlib.colors.Colormap
The colormap maps non-RGBA scalar data to colors.
vmin, vmax : scalar
Data range covered by the colormap. By default, the complete
range of the data is covered.
figure : matplotlib.figure.Figure
Matplotlib figure to use for plotting.
title : str
Window and subplot title.
dpi : int
Resolution of the figure in dots per inch.
subplot : int
A matplotlib.pyplot.subplot specification, e.g. 111.
maxdim : int
Maximum image width and length.
kwargs : dict
Additional arguments for matplotlib.pyplot.imshow.
"""
# TODO: rewrite detection of isrgb, iscontig
# TODO: use planarconfig
isrgb = photometric in ('RGB', 'YCBCR') # 'PALETTE', 'YCBCR'
if data.dtype == 'float16':
data = data.astype('float32')
if data.dtype.kind == 'b':
isrgb = False
if isrgb and not (data.shape[-1] in (3, 4) or (
data.ndim > 2 and data.shape[-3] in (3, 4))):
isrgb = False
photometric = 'MINISBLACK'
data = data.squeeze()
if photometric in ('MINISWHITE', 'MINISBLACK', None):
data = reshape_nd(data, 2)
else:
data = reshape_nd(data, 3)
dims = data.ndim
if dims < 2:
raise ValueError('not an image')
elif dims == 2:
dims = 0
isrgb = False
else:
if isrgb and data.shape[-3] in (3, 4):
data = numpy.swapaxes(data, -3, -2)
data = numpy.swapaxes(data, -2, -1)
elif not isrgb and (data.shape[-1] < data.shape[-2] // 8 and
data.shape[-1] < data.shape[-3] // 8 and
data.shape[-1] < 5):
data = numpy.swapaxes(data, -3, -1)
data = numpy.swapaxes(data, -2, -1)
isrgb = isrgb and data.shape[-1] in (3, 4)
dims -= 3 if isrgb else 2
if interpolation is None:
threshold = 512
elif isinstance(interpolation, int):
threshold = interpolation
else:
threshold = 0
if isrgb:
data = data[..., :maxdim, :maxdim, :maxdim]
if threshold:
if (data.shape[-2] > threshold or data.shape[-3] > threshold):
interpolation = 'bilinear'
else:
interpolation = 'nearest'
else:
data = data[..., :maxdim, :maxdim]
if threshold:
if (data.shape[-1] > threshold or data.shape[-2] > threshold):
interpolation = 'bilinear'
else:
interpolation = 'nearest'
if photometric == 'PALETTE' and isrgb:
datamax = data.max()
if datamax > 255:
data = data >> 8 # possible precision loss
data = data.astype('B')
elif data.dtype.kind in 'ui':
if not (isrgb and data.dtype.itemsize <= 1) or bitspersample is None:
try:
bitspersample = int(math.ceil(math.log(data.max(), 2)))
except Exception:
bitspersample = data.dtype.itemsize * 8
elif not isinstance(bitspersample, inttypes):
# bitspersample can be tuple, e.g. (5, 6, 5)
bitspersample = data.dtype.itemsize * 8
datamax = 2**bitspersample
if isrgb:
if bitspersample < 8:
data = data << (8 - bitspersample)
elif bitspersample > 8:
data = data >> (bitspersample - 8) # precision loss
data = data.astype('B')
elif data.dtype.kind == 'f':
datamax = data.max()
if isrgb and datamax > 1.0:
if data.dtype.char == 'd':
data = data.astype('f')
data /= datamax
else:
data = data / datamax
elif data.dtype.kind == 'b':
datamax = 1
elif data.dtype.kind == 'c':
data = numpy.absolute(data)
datamax = data.max()
if not isrgb:
if vmax is None:
vmax = datamax
if vmin is None:
if data.dtype.kind == 'i':
dtmin = numpy.iinfo(data.dtype).min
vmin = numpy.min(data)
if vmin == dtmin:
vmin = numpy.min(data[data > dtmin])
elif data.dtype.kind == 'f':
dtmin = numpy.finfo(data.dtype).min
vmin = numpy.min(data)
if vmin == dtmin:
vmin = numpy.min(data[data > dtmin])
else:
vmin = 0
pyplot = sys.modules['matplotlib.pyplot']
if figure is None:
pyplot.rc('font', family='sans-serif', weight='normal', size=8)
figure = pyplot.figure(dpi=dpi, figsize=(10.3, 6.3), frameon=True,
facecolor='1.0', edgecolor='w')
try:
figure.canvas.manager.window.title(title)
except Exception:
pass
size = len(title.splitlines()) if title else 1
pyplot.subplots_adjust(bottom=0.03*(dims+2), top=0.98-size*0.03,
left=0.1, right=0.95, hspace=0.05, wspace=0.0)
subplot = pyplot.subplot(subplot)
if title:
try:
title = unicode(title, 'Windows-1252')
except TypeError:
pass
pyplot.title(title, size=11)
if cmap is None:
if data.dtype.char == '?':
cmap = 'gray'
elif data.dtype.kind in 'buf' or vmin == 0:
cmap = 'viridis'
else:
cmap = 'coolwarm'
if photometric == 'MINISWHITE':
cmap += '_r'
image = pyplot.imshow(numpy.atleast_2d(data[(0,) * dims].squeeze()),
vmin=vmin, vmax=vmax, cmap=cmap,
interpolation=interpolation, **kwargs)
if not isrgb:
pyplot.colorbar() # panchor=(0.55, 0.5), fraction=0.05
def format_coord(x, y):
# callback function to format coordinate display in toolbar
x = int(x + 0.5)
y = int(y + 0.5)
try:
if dims:
return '%s @ %s [%4i, %4i]' % (
curaxdat[1][y, x], current, y, x)
return '%s @ [%4i, %4i]' % (data[y, x], y, x)
except IndexError:
return ''
def none(event):
return ''
subplot.format_coord = format_coord
image.get_cursor_data = none
image.format_cursor_data = none
if dims:
current = list((0,) * dims)
curaxdat = [0, data[tuple(current)].squeeze()]
sliders = [pyplot.Slider(
pyplot.axes([0.125, 0.03*(axis+1), 0.725, 0.025]),
'Dimension %i' % axis, 0, data.shape[axis]-1, 0, facecolor='0.5',
valfmt='%%.0f [%i]' % data.shape[axis]) for axis in range(dims)]
for slider in sliders:
slider.drawon = False
def set_image(current, sliders=sliders, data=data):
# change image and redraw canvas
curaxdat[1] = data[tuple(current)].squeeze()
image.set_data(curaxdat[1])
for ctrl, index in zip(sliders, current):
ctrl.eventson = False
ctrl.set_val(index)
ctrl.eventson = True
figure.canvas.draw()
def on_changed(index, axis, data=data, current=current):
# callback function for slider change event
index = int(round(index))
curaxdat[0] = axis
if index == current[axis]:
return
if index >= data.shape[axis]:
index = 0
elif index < 0:
index = data.shape[axis] - 1
current[axis] = index
set_image(current)
def on_keypressed(event, data=data, current=current):
# callback function for key press event
key = event.key
axis = curaxdat[0]
if str(key) in '0123456789':
on_changed(key, axis)
elif key == 'right':
on_changed(current[axis] + 1, axis)
elif key == 'left':
on_changed(current[axis] - 1, axis)
elif key == 'up':
curaxdat[0] = 0 if axis == len(data.shape)-1 else axis + 1
elif key == 'down':
curaxdat[0] = len(data.shape)-1 if axis == 0 else axis - 1
elif key == 'end':
on_changed(data.shape[axis] - 1, axis)
elif key == 'home':
on_changed(0, axis)
figure.canvas.mpl_connect('key_press_event', on_keypressed)
for axis, ctrl in enumerate(sliders):
ctrl.on_changed(lambda k, a=axis: on_changed(k, a))
return figure, subplot, image
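# imshow's integer branch infers an effective bits-per-sample from the data
# maximum and shifts RGB samples into the 8-bit display range. That scaling
# step in isolation, with made-up sample values:

```python
import math
import numpy

# Infer bits-per-sample from the data maximum, then right-shift into the
# uint8 display range, as imshow does for integer RGB data (precision loss).
data = numpy.array([[0, 511], [1023, 255]], dtype='uint16')
bitspersample = int(math.ceil(math.log(data.max(), 2)))
assert bitspersample == 10
display = (data >> (bitspersample - 8)).astype('B')
assert int(display.max()) == 255
```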
def _app_show():
"""Block the GUI. For use as skimage plugin."""
pyplot = sys.modules['matplotlib.pyplot']
pyplot.show()
def askopenfilename(**kwargs):
"""Return file name(s) from Tkinter's file open dialog."""
try:
from Tkinter import Tk
import tkFileDialog as filedialog
except ImportError:
from tkinter import Tk, filedialog
root = Tk()
root.withdraw()
root.update()
filenames = filedialog.askopenfilename(**kwargs)
root.destroy()
return filenames
def main(argv=None):
"""Tifffile command line usage main function."""
if argv is None:
argv = sys.argv
import optparse # TODO: use argparse
parser = optparse.OptionParser(
usage='usage: %prog [options] path',
description='Display image data in TIFF files.',
version='%%prog %s' % __version__, prog='tifffile')
opt = parser.add_option
opt('-p', '--page', dest='page', type='int', default=-1,
help='display single page')
opt('-s', '--series', dest='series', type='int', default=-1,
help='display series of pages of same shape')
opt('--nomultifile', dest='nomultifile', action='store_true',
default=False, help='do not read OME series from multiple files')
opt('--noplots', dest='noplots', type='int', default=8,
help='maximum number of plots')
opt('--interpol', dest='interpol', metavar='INTERPOL', default=None,
help='image interpolation method')
opt('--dpi', dest='dpi', type='int', default=96,
help='plot resolution')
opt('--vmin', dest='vmin', type='int', default=None,
help='minimum value for colormapping')
opt('--vmax', dest='vmax', type='int', default=None,
help='maximum value for colormapping')
opt('--debug', dest='debug', action='store_true', default=False,
help='raise exception on failures')
opt('--doctest', dest='doctest', action='store_true', default=False,
help='run the docstring examples')
opt('-v', '--detail', dest='detail', type='int', default=2)
opt('-q', '--quiet', dest='quiet', action='store_true')
settings, path = parser.parse_args()
path = ' '.join(path)
if settings.doctest:
import doctest
if sys.version_info < (3, 6):
print('Doctests work with Python >=3.6 only')
return 0
doctest.testmod(optionflags=doctest.ELLIPSIS)
return 0
if not path:
path = askopenfilename(title='Select a TIFF file',
filetypes=TIFF.FILEOPEN_FILTER)
if not path:
parser.error('No file specified')
if any(i in path for i in '?*'):
path = glob.glob(path)
if not path:
print('No files match the pattern')
return 0
# TODO: handle image sequences
path = path[0]
if not settings.quiet:
print('\nReading file structure...', end=' ')
start = time.time()
try:
tif = TiffFile(path, multifile=not settings.nomultifile)
except Exception as e:
if settings.debug:
raise
else:
print('\n', e)
sys.exit(0)
if not settings.quiet:
print('%.3f ms' % ((time.time()-start) * 1e3))
if tif.is_ome:
settings.norgb = True
images = []
if settings.noplots > 0:
if not settings.quiet:
print('Reading image data... ', end=' ')
def notnone(x):
return next(i for i in x if i is not None)
start = time.time()
try:
if settings.page >= 0:
images = [(tif.asarray(key=settings.page),
tif[settings.page], None)]
elif settings.series >= 0:
images = [(tif.asarray(series=settings.series),
notnone(tif.series[settings.series]._pages),
tif.series[settings.series])]
else:
for i, s in enumerate(tif.series[:settings.noplots]):
try:
images.append((tif.asarray(series=i),
notnone(s._pages),
tif.series[i]))
except Exception as e:
images.append((None, notnone(s.pages), None))
if settings.debug:
raise
else:
print('\nSeries %i failed: %s... ' % (i, e),
end='')
if not settings.quiet:
print('%.3f ms' % ((time.time()-start) * 1e3))
except Exception as e:
if settings.debug:
raise
else:
print(e)
if not settings.quiet:
print()
print(TiffFile.__str__(tif, detail=int(settings.detail)))
print()
tif.close()
if images and settings.noplots > 0:
try:
import matplotlib
matplotlib.use('TkAgg')
from matplotlib import pyplot
except ImportError as e:
logging.warning('tifffile.main: %s', str(e))
else:
for img, page, series in images:
if img is None:
continue
vmin, vmax = settings.vmin, settings.vmax
if 'GDAL_NODATA' in page.tags:
try:
vmin = numpy.min(
img[img > float(page.tags['GDAL_NODATA'].value)])
except ValueError:
pass
if tif.is_stk:
try:
vmin = tif.stk_metadata['MinScale']
vmax = tif.stk_metadata['MaxScale']
except KeyError:
pass
else:
if vmax <= vmin:
vmin, vmax = settings.vmin, settings.vmax
if series:
title = '%s\n%s\n%s' % (str(tif), str(page), str(series))
else:
title = '%s\n %s' % (str(tif), str(page))
photometric = 'MINISBLACK'
if page.photometric not in (3,):
photometric = TIFF.PHOTOMETRIC(page.photometric).name
imshow(img, title=title, vmin=vmin, vmax=vmax,
bitspersample=page.bitspersample,
photometric=photometric,
interpolation=settings.interpol,
dpi=settings.dpi)
pyplot.show()
return 0
if sys.version_info[0] == 2:
inttypes = int, long # noqa
def print_(*args, **kwargs):
"""Print function with flush support."""
flush = kwargs.pop('flush', False)
print(*args, **kwargs)
if flush:
sys.stdout.flush()
def bytes2str(b, encoding=None, errors=None):
"""Return string from bytes."""
return b
def str2bytes(s, encoding=None):
"""Return bytes from string."""
return s
def byte2int(b):
"""Return value of byte as int."""
return ord(b)
class FileNotFoundError(IOError):
"""FileNotFoundError exception for Python 2."""
TiffFrame = TiffPage # noqa
else:
inttypes = int
basestring = str, bytes
unicode = str
print_ = print
def bytes2str(b, encoding=None, errors='strict'):
"""Return unicode string from encoded bytes."""
if encoding is not None:
return b.decode(encoding, errors)
try:
return b.decode('utf-8', errors)
except UnicodeDecodeError:
return b.decode('cp1252', errors)
def str2bytes(s, encoding='cp1252'):
"""Return bytes from unicode string."""
return s.encode(encoding)
def byte2int(b):
"""Return value of byte as int."""
return b
if __name__ == '__main__':
sys.exit(main())
# -*- coding: utf-8 -*-
# tifffile_geodb.py
# GeoTIFF GeoKey Database
# Adapted from http://gis.ess.washington.edu/data/raster/drg/docs/geotiff.txt
import enum
class Proj(enum.IntEnum):
"""Projection Codes."""
Undefined = 0
User_Defined = 32767
Alabama_CS27_East = 10101
Alabama_CS27_West = 10102
Alabama_CS83_East = 10131
Alabama_CS83_West = 10132
Arizona_Coordinate_System_east = 10201
Arizona_Coordinate_System_Central = 10202
Arizona_Coordinate_System_west = 10203
Arizona_CS83_east = 10231
Arizona_CS83_Central = 10232
Arizona_CS83_west = 10233
Arkansas_CS27_North = 10301
Arkansas_CS27_South = 10302
Arkansas_CS83_North = 10331
Arkansas_CS83_South = 10332
California_CS27_I = 10401
California_CS27_II = 10402
California_CS27_III = 10403
California_CS27_IV = 10404
California_CS27_V = 10405
California_CS27_VI = 10406
California_CS27_VII = 10407
California_CS83_1 = 10431
California_CS83_2 = 10432
California_CS83_3 = 10433
California_CS83_4 = 10434
California_CS83_5 = 10435
California_CS83_6 = 10436
Colorado_CS27_North = 10501
Colorado_CS27_Central = 10502
Colorado_CS27_South = 10503
Colorado_CS83_North = 10531
Colorado_CS83_Central = 10532
Colorado_CS83_South = 10533
Connecticut_CS27 = 10600
Connecticut_CS83 = 10630
Delaware_CS27 = 10700
Delaware_CS83 = 10730
Florida_CS27_East = 10901
Florida_CS27_West = 10902
Florida_CS27_North = 10903
Florida_CS83_East = 10931
Florida_CS83_West = 10932
Florida_CS83_North = 10933
Georgia_CS27_East = 11001
Georgia_CS27_West = 11002
Georgia_CS83_East = 11031
Georgia_CS83_West = 11032
Idaho_CS27_East = 11101
Idaho_CS27_Central = 11102
Idaho_CS27_West = 11103
Idaho_CS83_East = 11131
Idaho_CS83_Central = 11132
Idaho_CS83_West = 11133
Illinois_CS27_East = 11201
Illinois_CS27_West = 11202
Illinois_CS83_East = 11231
Illinois_CS83_West = 11232
Indiana_CS27_East = 11301
Indiana_CS27_West = 11302
Indiana_CS83_East = 11331
Indiana_CS83_West = 11332
Iowa_CS27_North = 11401
Iowa_CS27_South = 11402
Iowa_CS83_North = 11431
Iowa_CS83_South = 11432
Kansas_CS27_North = 11501
Kansas_CS27_South = 11502
Kansas_CS83_North = 11531
Kansas_CS83_South = 11532
Kentucky_CS27_North = 11601
Kentucky_CS27_South = 11602
Kentucky_CS83_North = 15303
Kentucky_CS83_South = 11632
Louisiana_CS27_North = 11701
Louisiana_CS27_South = 11702
Louisiana_CS83_North = 11731
Louisiana_CS83_South = 11732
Maine_CS27_East = 11801
Maine_CS27_West = 11802
Maine_CS83_East = 11831
Maine_CS83_West = 11832
Maryland_CS27 = 11900
Maryland_CS83 = 11930
Massachusetts_CS27_Mainland = 12001
Massachusetts_CS27_Island = 12002
Massachusetts_CS83_Mainland = 12031
Massachusetts_CS83_Island = 12032
Michigan_State_Plane_East = 12101
Michigan_State_Plane_Old_Central = 12102
Michigan_State_Plane_West = 12103
Michigan_CS27_North = 12111
Michigan_CS27_Central = 12112
Michigan_CS27_South = 12113
Michigan_CS83_North = 12141
Michigan_CS83_Central = 12142
Michigan_CS83_South = 12143
Minnesota_CS27_North = 12201
Minnesota_CS27_Central = 12202
Minnesota_CS27_South = 12203
Minnesota_CS83_North = 12231
Minnesota_CS83_Central = 12232
Minnesota_CS83_South = 12233
Mississippi_CS27_East = 12301
Mississippi_CS27_West = 12302
Mississippi_CS83_East = 12331
Mississippi_CS83_West = 12332
Missouri_CS27_East = 12401
Missouri_CS27_Central = 12402
Missouri_CS27_West = 12403
Missouri_CS83_East = 12431
Missouri_CS83_Central = 12432
Missouri_CS83_West = 12433
Montana_CS27_North = 12501
Montana_CS27_Central = 12502
Montana_CS27_South = 12503
Montana_CS83 = 12530
Nebraska_CS27_North = 12601
Nebraska_CS27_South = 12602
Nebraska_CS83 = 12630
Nevada_CS27_East = 12701
Nevada_CS27_Central = 12702
Nevada_CS27_West = 12703
Nevada_CS83_East = 12731
Nevada_CS83_Central = 12732
Nevada_CS83_West = 12733
New_Hampshire_CS27 = 12800
New_Hampshire_CS83 = 12830
New_Jersey_CS27 = 12900
New_Jersey_CS83 = 12930
New_Mexico_CS27_East = 13001
New_Mexico_CS27_Central = 13002
New_Mexico_CS27_West = 13003
New_Mexico_CS83_East = 13031
New_Mexico_CS83_Central = 13032
New_Mexico_CS83_West = 13033
New_York_CS27_East = 13101
New_York_CS27_Central = 13102
New_York_CS27_West = 13103
New_York_CS27_Long_Island = 13104
New_York_CS83_East = 13131
New_York_CS83_Central = 13132
New_York_CS83_West = 13133
New_York_CS83_Long_Island = 13134
North_Carolina_CS27 = 13200
North_Carolina_CS83 = 13230
North_Dakota_CS27_North = 13301
North_Dakota_CS27_South = 13302
North_Dakota_CS83_North = 13331
North_Dakota_CS83_South = 13332
Ohio_CS27_North = 13401
Ohio_CS27_South = 13402
Ohio_CS83_North = 13431
Ohio_CS83_South = 13432
Oklahoma_CS27_North = 13501
Oklahoma_CS27_South = 13502
Oklahoma_CS83_North = 13531
Oklahoma_CS83_South = 13532
Oregon_CS27_North = 13601
Oregon_CS27_South = 13602
Oregon_CS83_North = 13631
Oregon_CS83_South = 13632
Pennsylvania_CS27_North = 13701
Pennsylvania_CS27_South = 13702
Pennsylvania_CS83_North = 13731
Pennsylvania_CS83_South = 13732
Rhode_Island_CS27 = 13800
Rhode_Island_CS83 = 13830
South_Carolina_CS27_North = 13901
South_Carolina_CS27_South = 13902
South_Carolina_CS83 = 13930
South_Dakota_CS27_North = 14001
South_Dakota_CS27_South = 14002
South_Dakota_CS83_North = 14031
South_Dakota_CS83_South = 14032
Tennessee_CS27 = 15302
Tennessee_CS83 = 14130
Texas_CS27_North = 14201
Texas_CS27_North_Central = 14202
Texas_CS27_Central = 14203
Texas_CS27_South_Central = 14204
Texas_CS27_South = 14205
Texas_CS83_North = 14231
Texas_CS83_North_Central = 14232
Texas_CS83_Central = 14233
Texas_CS83_South_Central = 14234
Texas_CS83_South = 14235
Utah_CS27_North = 14301
Utah_CS27_Central = 14302
Utah_CS27_South = 14303
Utah_CS83_North = 14331
Utah_CS83_Central = 14332
Utah_CS83_South = 14333
Vermont_CS27 = 14400
Vermont_CS83 = 14430
Virginia_CS27_North = 14501
Virginia_CS27_South = 14502
Virginia_CS83_North = 14531
Virginia_CS83_South = 14532
Washington_CS27_North = 14601
Washington_CS27_South = 14602
Washington_CS83_North = 14631
Washington_CS83_South = 14632
West_Virginia_CS27_North = 14701
West_Virginia_CS27_South = 14702
West_Virginia_CS83_North = 14731
West_Virginia_CS83_South = 14732
Wisconsin_CS27_North = 14801
Wisconsin_CS27_Central = 14802
Wisconsin_CS27_South = 14803
Wisconsin_CS83_North = 14831
Wisconsin_CS83_Central = 14832
Wisconsin_CS83_South = 14833
Wyoming_CS27_East = 14901
Wyoming_CS27_East_Central = 14902
Wyoming_CS27_West_Central = 14903
Wyoming_CS27_West = 14904
Wyoming_CS83_East = 14931
Wyoming_CS83_East_Central = 14932
Wyoming_CS83_West_Central = 14933
Wyoming_CS83_West = 14934
Alaska_CS27_1 = 15001
Alaska_CS27_2 = 15002
Alaska_CS27_3 = 15003
Alaska_CS27_4 = 15004
Alaska_CS27_5 = 15005
Alaska_CS27_6 = 15006
Alaska_CS27_7 = 15007
Alaska_CS27_8 = 15008
Alaska_CS27_9 = 15009
Alaska_CS27_10 = 15010
Alaska_CS83_1 = 15031
Alaska_CS83_2 = 15032
Alaska_CS83_3 = 15033
Alaska_CS83_4 = 15034
Alaska_CS83_5 = 15035
Alaska_CS83_6 = 15036
Alaska_CS83_7 = 15037
Alaska_CS83_8 = 15038
Alaska_CS83_9 = 15039
Alaska_CS83_10 = 15040
Hawaii_CS27_1 = 15101
Hawaii_CS27_2 = 15102
Hawaii_CS27_3 = 15103
Hawaii_CS27_4 = 15104
Hawaii_CS27_5 = 15105
Hawaii_CS83_1 = 15131
Hawaii_CS83_2 = 15132
Hawaii_CS83_3 = 15133
Hawaii_CS83_4 = 15134
Hawaii_CS83_5 = 15135
Puerto_Rico_CS27 = 15201
St_Croix = 15202
Puerto_Rico_Virgin_Is = 15230
BLM_14N_feet = 15914
BLM_15N_feet = 15915
BLM_16N_feet = 15916
BLM_17N_feet = 15917
UTM_zone_1N = 16001
UTM_zone_2N = 16002
UTM_zone_3N = 16003
UTM_zone_4N = 16004
UTM_zone_5N = 16005
UTM_zone_6N = 16006
UTM_zone_7N = 16007
UTM_zone_8N = 16008
UTM_zone_9N = 16009
UTM_zone_10N = 16010
UTM_zone_11N = 16011
UTM_zone_12N = 16012
UTM_zone_13N = 16013
UTM_zone_14N = 16014
UTM_zone_15N = 16015
UTM_zone_16N = 16016
UTM_zone_17N = 16017
UTM_zone_18N = 16018
UTM_zone_19N = 16019
UTM_zone_20N = 16020
UTM_zone_21N = 16021
UTM_zone_22N = 16022
UTM_zone_23N = 16023
UTM_zone_24N = 16024
UTM_zone_25N = 16025
UTM_zone_26N = 16026
UTM_zone_27N = 16027
UTM_zone_28N = 16028
UTM_zone_29N = 16029
UTM_zone_30N = 16030
UTM_zone_31N = 16031
UTM_zone_32N = 16032
UTM_zone_33N = 16033
UTM_zone_34N = 16034
UTM_zone_35N = 16035
UTM_zone_36N = 16036
UTM_zone_37N = 16037
UTM_zone_38N = 16038
UTM_zone_39N = 16039
UTM_zone_40N = 16040
UTM_zone_41N = 16041
UTM_zone_42N = 16042
UTM_zone_43N = 16043
UTM_zone_44N = 16044
UTM_zone_45N = 16045
UTM_zone_46N = 16046
UTM_zone_47N = 16047
UTM_zone_48N = 16048
UTM_zone_49N = 16049
UTM_zone_50N = 16050
UTM_zone_51N = 16051
UTM_zone_52N = 16052
UTM_zone_53N = 16053
UTM_zone_54N = 16054
UTM_zone_55N = 16055
UTM_zone_56N = 16056
UTM_zone_57N = 16057
UTM_zone_58N = 16058
UTM_zone_59N = 16059
UTM_zone_60N = 16060
UTM_zone_1S = 16101
UTM_zone_2S = 16102
UTM_zone_3S = 16103
UTM_zone_4S = 16104
UTM_zone_5S = 16105
UTM_zone_6S = 16106
UTM_zone_7S = 16107
UTM_zone_8S = 16108
UTM_zone_9S = 16109
UTM_zone_10S = 16110
UTM_zone_11S = 16111
UTM_zone_12S = 16112
UTM_zone_13S = 16113
UTM_zone_14S = 16114
UTM_zone_15S = 16115
UTM_zone_16S = 16116
UTM_zone_17S = 16117
UTM_zone_18S = 16118
UTM_zone_19S = 16119
UTM_zone_20S = 16120
UTM_zone_21S = 16121
UTM_zone_22S = 16122
UTM_zone_23S = 16123
UTM_zone_24S = 16124
UTM_zone_25S = 16125
UTM_zone_26S = 16126
UTM_zone_27S = 16127
UTM_zone_28S = 16128
UTM_zone_29S = 16129
UTM_zone_30S = 16130
UTM_zone_31S = 16131
UTM_zone_32S = 16132
UTM_zone_33S = 16133
UTM_zone_34S = 16134
UTM_zone_35S = 16135
UTM_zone_36S = 16136
UTM_zone_37S = 16137
UTM_zone_38S = 16138
UTM_zone_39S = 16139
UTM_zone_40S = 16140
UTM_zone_41S = 16141
UTM_zone_42S = 16142
UTM_zone_43S = 16143
UTM_zone_44S = 16144
UTM_zone_45S = 16145
UTM_zone_46S = 16146
UTM_zone_47S = 16147
UTM_zone_48S = 16148
UTM_zone_49S = 16149
UTM_zone_50S = 16150
UTM_zone_51S = 16151
UTM_zone_52S = 16152
UTM_zone_53S = 16153
UTM_zone_54S = 16154
UTM_zone_55S = 16155
UTM_zone_56S = 16156
UTM_zone_57S = 16157
UTM_zone_58S = 16158
UTM_zone_59S = 16159
UTM_zone_60S = 16160
Gauss_Kruger_zone_0 = 16200
Gauss_Kruger_zone_1 = 16201
Gauss_Kruger_zone_2 = 16202
Gauss_Kruger_zone_3 = 16203
Gauss_Kruger_zone_4 = 16204
Gauss_Kruger_zone_5 = 16205
Map_Grid_of_Australia_48 = 17348
Map_Grid_of_Australia_49 = 17349
Map_Grid_of_Australia_50 = 17350
Map_Grid_of_Australia_51 = 17351
Map_Grid_of_Australia_52 = 17352
Map_Grid_of_Australia_53 = 17353
Map_Grid_of_Australia_54 = 17354
Map_Grid_of_Australia_55 = 17355
Map_Grid_of_Australia_56 = 17356
Map_Grid_of_Australia_57 = 17357
Map_Grid_of_Australia_58 = 17358
Australian_Map_Grid_48 = 17448
Australian_Map_Grid_49 = 17449
Australian_Map_Grid_50 = 17450
Australian_Map_Grid_51 = 17451
Australian_Map_Grid_52 = 17452
Australian_Map_Grid_53 = 17453
Australian_Map_Grid_54 = 17454
Australian_Map_Grid_55 = 17455
Australian_Map_Grid_56 = 17456
Australian_Map_Grid_57 = 17457
Australian_Map_Grid_58 = 17458
Argentina_1 = 18031
Argentina_2 = 18032
Argentina_3 = 18033
Argentina_4 = 18034
Argentina_5 = 18035
Argentina_6 = 18036
Argentina_7 = 18037
Colombia_3W = 18051
Colombia_Bogota = 18052
Colombia_3E = 18053
Colombia_6E = 18054
Egypt_Red_Belt = 18072
Egypt_Purple_Belt = 18073
Extended_Purple_Belt = 18074
New_Zealand_North_Island_Nat_Grid = 18141
New_Zealand_South_Island_Nat_Grid = 18142
Bahrain_Grid = 19900
Netherlands_E_Indies_Equatorial = 19905
RSO_Borneo = 19912
Stereo_70 = 19926
class PCS(enum.IntEnum):
"""Projected CS Type Codes."""
Undefined = 0
User_Defined = 32767
Adindan_UTM_zone_37N = 20137
Adindan_UTM_zone_38N = 20138
AGD66_AMG_zone_48 = 20248
AGD66_AMG_zone_49 = 20249
AGD66_AMG_zone_50 = 20250
AGD66_AMG_zone_51 = 20251
AGD66_AMG_zone_52 = 20252
AGD66_AMG_zone_53 = 20253
AGD66_AMG_zone_54 = 20254
AGD66_AMG_zone_55 = 20255
AGD66_AMG_zone_56 = 20256
AGD66_AMG_zone_57 = 20257
AGD66_AMG_zone_58 = 20258
AGD84_AMG_zone_48 = 20348
AGD84_AMG_zone_49 = 20349
AGD84_AMG_zone_50 = 20350
AGD84_AMG_zone_51 = 20351
AGD84_AMG_zone_52 = 20352
AGD84_AMG_zone_53 = 20353
AGD84_AMG_zone_54 = 20354
AGD84_AMG_zone_55 = 20355
AGD84_AMG_zone_56 = 20356
AGD84_AMG_zone_57 = 20357
AGD84_AMG_zone_58 = 20358
Ain_el_Abd_UTM_zone_37N = 20437
Ain_el_Abd_UTM_zone_38N = 20438
Ain_el_Abd_UTM_zone_39N = 20439
Ain_el_Abd_Bahrain_Grid = 20499
Afgooye_UTM_zone_38N = 20538
Afgooye_UTM_zone_39N = 20539
Lisbon_Portugese_Grid = 20700
Aratu_UTM_zone_22S = 20822
Aratu_UTM_zone_23S = 20823
Aratu_UTM_zone_24S = 20824
Arc_1950_Lo13 = 20973
Arc_1950_Lo15 = 20975
Arc_1950_Lo17 = 20977
Arc_1950_Lo19 = 20979
Arc_1950_Lo21 = 20981
Arc_1950_Lo23 = 20983
Arc_1950_Lo25 = 20985
Arc_1950_Lo27 = 20987
Arc_1950_Lo29 = 20989
Arc_1950_Lo31 = 20991
Arc_1950_Lo33 = 20993
Arc_1950_Lo35 = 20995
Batavia_NEIEZ = 21100
Batavia_UTM_zone_48S = 21148
Batavia_UTM_zone_49S = 21149
Batavia_UTM_zone_50S = 21150
Beijing_Gauss_zone_13 = 21413
Beijing_Gauss_zone_14 = 21414
Beijing_Gauss_zone_15 = 21415
Beijing_Gauss_zone_16 = 21416
Beijing_Gauss_zone_17 = 21417
Beijing_Gauss_zone_18 = 21418
Beijing_Gauss_zone_19 = 21419
Beijing_Gauss_zone_20 = 21420
Beijing_Gauss_zone_21 = 21421
Beijing_Gauss_zone_22 = 21422
Beijing_Gauss_zone_23 = 21423
Beijing_Gauss_13N = 21473
Beijing_Gauss_14N = 21474
Beijing_Gauss_15N = 21475
Beijing_Gauss_16N = 21476
Beijing_Gauss_17N = 21477
Beijing_Gauss_18N = 21478
Beijing_Gauss_19N = 21479
Beijing_Gauss_20N = 21480
Beijing_Gauss_21N = 21481
Beijing_Gauss_22N = 21482
Beijing_Gauss_23N = 21483
Belge_Lambert_50 = 21500
Bern_1898_Swiss_Old = 21790
Bogota_UTM_zone_17N = 21817
Bogota_UTM_zone_18N = 21818
Bogota_Colombia_3W = 21891
Bogota_Colombia_Bogota = 21892
Bogota_Colombia_3E = 21893
Bogota_Colombia_6E = 21894
Camacupa_UTM_32S = 22032
Camacupa_UTM_33S = 22033
C_Inchauspe_Argentina_1 = 22191
C_Inchauspe_Argentina_2 = 22192
C_Inchauspe_Argentina_3 = 22193
C_Inchauspe_Argentina_4 = 22194
C_Inchauspe_Argentina_5 = 22195
C_Inchauspe_Argentina_6 = 22196
C_Inchauspe_Argentina_7 = 22197
Carthage_UTM_zone_32N = 22332
Carthage_Nord_Tunisie = 22391
Carthage_Sud_Tunisie = 22392
Corrego_Alegre_UTM_23S = 22523
Corrego_Alegre_UTM_24S = 22524
Douala_UTM_zone_32N = 22832
Egypt_1907_Red_Belt = 22992
Egypt_1907_Purple_Belt = 22993
Egypt_1907_Ext_Purple = 22994
ED50_UTM_zone_28N = 23028
ED50_UTM_zone_29N = 23029
ED50_UTM_zone_30N = 23030
ED50_UTM_zone_31N = 23031
ED50_UTM_zone_32N = 23032
ED50_UTM_zone_33N = 23033
ED50_UTM_zone_34N = 23034
ED50_UTM_zone_35N = 23035
ED50_UTM_zone_36N = 23036
ED50_UTM_zone_37N = 23037
ED50_UTM_zone_38N = 23038
Fahud_UTM_zone_39N = 23239
Fahud_UTM_zone_40N = 23240
Garoua_UTM_zone_33N = 23433
ID74_UTM_zone_46N = 23846
ID74_UTM_zone_47N = 23847
ID74_UTM_zone_48N = 23848
ID74_UTM_zone_49N = 23849
ID74_UTM_zone_50N = 23850
ID74_UTM_zone_51N = 23851
ID74_UTM_zone_52N = 23852
ID74_UTM_zone_53N = 23853
ID74_UTM_zone_46S = 23886
ID74_UTM_zone_47S = 23887
ID74_UTM_zone_48S = 23888
ID74_UTM_zone_49S = 23889
ID74_UTM_zone_50S = 23890
ID74_UTM_zone_51S = 23891
ID74_UTM_zone_52S = 23892
ID74_UTM_zone_53S = 23893
ID74_UTM_zone_54S = 23894
Indian_1954_UTM_47N = 23947
Indian_1954_UTM_48N = 23948
Indian_1975_UTM_47N = 24047
Indian_1975_UTM_48N = 24048
Jamaica_1875_Old_Grid = 24100
JAD69_Jamaica_Grid = 24200
Kalianpur_India_0 = 24370
Kalianpur_India_I = 24371
Kalianpur_India_IIa = 24372
Kalianpur_India_IIIa = 24373
Kalianpur_India_IVa = 24374
Kalianpur_India_IIb = 24382
Kalianpur_India_IIIb = 24383
Kalianpur_India_IVb = 24384
Kertau_Singapore_Grid = 24500
Kertau_UTM_zone_47N = 24547
Kertau_UTM_zone_48N = 24548
La_Canoa_UTM_zone_20N = 24720
La_Canoa_UTM_zone_21N = 24721
PSAD56_UTM_zone_18N = 24818
PSAD56_UTM_zone_19N = 24819
PSAD56_UTM_zone_20N = 24820
PSAD56_UTM_zone_21N = 24821
PSAD56_UTM_zone_17S = 24877
PSAD56_UTM_zone_18S = 24878
PSAD56_UTM_zone_19S = 24879
PSAD56_UTM_zone_20S = 24880
PSAD56_Peru_west_zone = 24891
PSAD56_Peru_central = 24892
PSAD56_Peru_east_zone = 24893
Leigon_Ghana_Grid = 25000
Lome_UTM_zone_31N = 25231
Luzon_Philippines_I = 25391
Luzon_Philippines_II = 25392
Luzon_Philippines_III = 25393
Luzon_Philippines_IV = 25394
Luzon_Philippines_V = 25395
Makassar_NEIEZ = 25700
Malongo_1987_UTM_32S = 25932
Merchich_Nord_Maroc = 26191
Merchich_Sud_Maroc = 26192
Merchich_Sahara = 26193
Massawa_UTM_zone_37N = 26237
Minna_UTM_zone_31N = 26331
Minna_UTM_zone_32N = 26332
Minna_Nigeria_West = 26391
Minna_Nigeria_Mid_Belt = 26392
Minna_Nigeria_East = 26393
Mhast_UTM_zone_32S = 26432
Monte_Mario_Italy_1 = 26591
Monte_Mario_Italy_2 = 26592
M_poraloko_UTM_32N = 26632
M_poraloko_UTM_32S = 26692
NAD27_UTM_zone_3N = 26703
NAD27_UTM_zone_4N = 26704
NAD27_UTM_zone_5N = 26705
NAD27_UTM_zone_6N = 26706
NAD27_UTM_zone_7N = 26707
NAD27_UTM_zone_8N = 26708
NAD27_UTM_zone_9N = 26709
NAD27_UTM_zone_10N = 26710
NAD27_UTM_zone_11N = 26711
NAD27_UTM_zone_12N = 26712
NAD27_UTM_zone_13N = 26713
NAD27_UTM_zone_14N = 26714
NAD27_UTM_zone_15N = 26715
NAD27_UTM_zone_16N = 26716
NAD27_UTM_zone_17N = 26717
NAD27_UTM_zone_18N = 26718
NAD27_UTM_zone_19N = 26719
NAD27_UTM_zone_20N = 26720
NAD27_UTM_zone_21N = 26721
NAD27_UTM_zone_22N = 26722
NAD27_Alabama_East = 26729
NAD27_Alabama_West = 26730
NAD27_Alaska_zone_1 = 26731
NAD27_Alaska_zone_2 = 26732
NAD27_Alaska_zone_3 = 26733
NAD27_Alaska_zone_4 = 26734
NAD27_Alaska_zone_5 = 26735
NAD27_Alaska_zone_6 = 26736
NAD27_Alaska_zone_7 = 26737
NAD27_Alaska_zone_8 = 26738
NAD27_Alaska_zone_9 = 26739
NAD27_Alaska_zone_10 = 26740
NAD27_California_I = 26741
NAD27_California_II = 26742
NAD27_California_III = 26743
NAD27_California_IV = 26744
NAD27_California_V = 26745
NAD27_California_VI = 26746
NAD27_California_VII = 26747
NAD27_Arizona_East = 26748
NAD27_Arizona_Central = 26749
NAD27_Arizona_West = 26750
NAD27_Arkansas_North = 26751
NAD27_Arkansas_South = 26752
NAD27_Colorado_North = 26753
NAD27_Colorado_Central = 26754
NAD27_Colorado_South = 26755
NAD27_Connecticut = 26756
NAD27_Delaware = 26757
NAD27_Florida_East = 26758
NAD27_Florida_West = 26759
NAD27_Florida_North = 26760
NAD27_Hawaii_zone_1 = 26761
NAD27_Hawaii_zone_2 = 26762
NAD27_Hawaii_zone_3 = 26763
NAD27_Hawaii_zone_4 = 26764
NAD27_Hawaii_zone_5 = 26765
NAD27_Georgia_East = 26766
NAD27_Georgia_West = 26767
NAD27_Idaho_East = 26768
NAD27_Idaho_Central = 26769
NAD27_Idaho_West = 26770
NAD27_Illinois_East = 26771
NAD27_Illinois_West = 26772
NAD27_Indiana_East = 26773
NAD27_BLM_14N_feet = 26774
NAD27_Indiana_West = 26774
NAD27_BLM_15N_feet = 26775
NAD27_Iowa_North = 26775
NAD27_BLM_16N_feet = 26776
NAD27_Iowa_South = 26776
NAD27_BLM_17N_feet = 26777
NAD27_Kansas_North = 26777
NAD27_Kansas_South = 26778
NAD27_Kentucky_North = 26779
NAD27_Kentucky_South = 26780
NAD27_Louisiana_North = 26781
NAD27_Louisiana_South = 26782
NAD27_Maine_East = 26783
NAD27_Maine_West = 26784
NAD27_Maryland = 26785
NAD27_Massachusetts = 26786
NAD27_Massachusetts_Is = 26787
NAD27_Michigan_North = 26788
NAD27_Michigan_Central = 26789
NAD27_Michigan_South = 26790
NAD27_Minnesota_North = 26791
NAD27_Minnesota_Cent = 26792
NAD27_Minnesota_South = 26793
NAD27_Mississippi_East = 26794
NAD27_Mississippi_West = 26795
NAD27_Missouri_East = 26796
NAD27_Missouri_Central = 26797
NAD27_Missouri_West = 26798
NAD_Michigan_Michigan_East = 26801
NAD_Michigan_Michigan_Old_Central = 26802
NAD_Michigan_Michigan_West = 26803
NAD83_UTM_zone_3N = 26903
NAD83_UTM_zone_4N = 26904
NAD83_UTM_zone_5N = 26905
NAD83_UTM_zone_6N = 26906
NAD83_UTM_zone_7N = 26907
NAD83_UTM_zone_8N = 26908
NAD83_UTM_zone_9N = 26909
NAD83_UTM_zone_10N = 26910
NAD83_UTM_zone_11N = 26911
NAD83_UTM_zone_12N = 26912
NAD83_UTM_zone_13N = 26913
NAD83_UTM_zone_14N = 26914
NAD83_UTM_zone_15N = 26915
NAD83_UTM_zone_16N = 26916
NAD83_UTM_zone_17N = 26917
NAD83_UTM_zone_18N = 26918
NAD83_UTM_zone_19N = 26919
NAD83_UTM_zone_20N = 26920
NAD83_UTM_zone_21N = 26921
NAD83_UTM_zone_22N = 26922
NAD83_UTM_zone_23N = 26923
NAD83_Alabama_East = 26929
NAD83_Alabama_West = 26930
NAD83_Alaska_zone_1 = 26931
NAD83_Alaska_zone_2 = 26932
NAD83_Alaska_zone_3 = 26933
NAD83_Alaska_zone_4 = 26934
NAD83_Alaska_zone_5 = 26935
NAD83_Alaska_zone_6 = 26936
NAD83_Alaska_zone_7 = 26937
NAD83_Alaska_zone_8 = 26938
NAD83_Alaska_zone_9 = 26939
NAD83_Alaska_zone_10 = 26940
NAD83_California_1 = 26941
NAD83_California_2 = 26942
NAD83_California_3 = 26943
NAD83_California_4 = 26944
NAD83_California_5 = 26945
NAD83_California_6 = 26946
NAD83_Arizona_East = 26948
NAD83_Arizona_Central = 26949
NAD83_Arizona_West = 26950
NAD83_Arkansas_North = 26951
NAD83_Arkansas_South = 26952
NAD83_Colorado_North = 26953
NAD83_Colorado_Central = 26954
NAD83_Colorado_South = 26955
NAD83_Connecticut = 26956
NAD83_Delaware = 26957
NAD83_Florida_East = 26958
NAD83_Florida_West = 26959
NAD83_Florida_North = 26960
NAD83_Hawaii_zone_1 = 26961
NAD83_Hawaii_zone_2 = 26962
NAD83_Hawaii_zone_3 = 26963
NAD83_Hawaii_zone_4 = 26964
NAD83_Hawaii_zone_5 = 26965
NAD83_Georgia_East = 26966
NAD83_Georgia_West = 26967
NAD83_Idaho_East = 26968
NAD83_Idaho_Central = 26969
NAD83_Idaho_West = 26970
NAD83_Illinois_East = 26971
NAD83_Illinois_West = 26972
NAD83_Indiana_East = 26973
NAD83_Indiana_West = 26974
NAD83_Iowa_North = 26975
NAD83_Iowa_South = 26976
NAD83_Kansas_North = 26977
NAD83_Kansas_South = 26978
NAD83_Kentucky_North = 2205
NAD83_Kentucky_South = 26980
NAD83_Louisiana_North = 26981
NAD83_Louisiana_South = 26982
NAD83_Maine_East = 26983
NAD83_Maine_West = 26984
NAD83_Maryland = 26985
NAD83_Massachusetts = 26986
NAD83_Massachusetts_Is = 26987
NAD83_Michigan_North = 26988
NAD83_Michigan_Central = 26989
NAD83_Michigan_South = 26990
NAD83_Minnesota_North = 26991
NAD83_Minnesota_Cent = 26992
NAD83_Minnesota_South = 26993
NAD83_Mississippi_East = 26994
NAD83_Mississippi_West = 26995
NAD83_Missouri_East = 26996
NAD83_Missouri_Central = 26997
NAD83_Missouri_West = 26998
Nahrwan_1967_UTM_38N = 27038
Nahrwan_1967_UTM_39N = 27039
Nahrwan_1967_UTM_40N = 27040
Naparima_UTM_20N = 27120
GD49_NZ_Map_Grid = 27200
GD49_North_Island_Grid = 27291
GD49_South_Island_Grid = 27292
Datum_73_UTM_zone_29N = 27429
ATF_Nord_de_Guerre = 27500
NTF_France_I = 27581
NTF_France_II = 27582
NTF_France_III = 27583
NTF_Nord_France = 27591
NTF_Centre_France = 27592
NTF_Sud_France = 27593
British_National_Grid = 27700
Point_Noire_UTM_32S = 28232
GDA94_MGA_zone_48 = 28348
GDA94_MGA_zone_49 = 28349
GDA94_MGA_zone_50 = 28350
GDA94_MGA_zone_51 = 28351
GDA94_MGA_zone_52 = 28352
GDA94_MGA_zone_53 = 28353
GDA94_MGA_zone_54 = 28354
GDA94_MGA_zone_55 = 28355
GDA94_MGA_zone_56 = 28356
GDA94_MGA_zone_57 = 28357
GDA94_MGA_zone_58 = 28358
Pulkovo_Gauss_zone_4 = 28404
Pulkovo_Gauss_zone_5 = 28405
Pulkovo_Gauss_zone_6 = 28406
Pulkovo_Gauss_zone_7 = 28407
Pulkovo_Gauss_zone_8 = 28408
Pulkovo_Gauss_zone_9 = 28409
Pulkovo_Gauss_zone_10 = 28410
Pulkovo_Gauss_zone_11 = 28411
Pulkovo_Gauss_zone_12 = 28412
Pulkovo_Gauss_zone_13 = 28413
Pulkovo_Gauss_zone_14 = 28414
Pulkovo_Gauss_zone_15 = 28415
Pulkovo_Gauss_zone_16 = 28416
Pulkovo_Gauss_zone_17 = 28417
Pulkovo_Gauss_zone_18 = 28418
Pulkovo_Gauss_zone_19 = 28419
Pulkovo_Gauss_zone_20 = 28420
Pulkovo_Gauss_zone_21 = 28421
Pulkovo_Gauss_zone_22 = 28422
Pulkovo_Gauss_zone_23 = 28423
Pulkovo_Gauss_zone_24 = 28424
Pulkovo_Gauss_zone_25 = 28425
Pulkovo_Gauss_zone_26 = 28426
Pulkovo_Gauss_zone_27 = 28427
Pulkovo_Gauss_zone_28 = 28428
Pulkovo_Gauss_zone_29 = 28429
Pulkovo_Gauss_zone_30 = 28430
Pulkovo_Gauss_zone_31 = 28431
Pulkovo_Gauss_zone_32 = 28432
Pulkovo_Gauss_4N = 28464
Pulkovo_Gauss_5N = 28465
Pulkovo_Gauss_6N = 28466
Pulkovo_Gauss_7N = 28467
Pulkovo_Gauss_8N = 28468
Pulkovo_Gauss_9N = 28469
Pulkovo_Gauss_10N = 28470
Pulkovo_Gauss_11N = 28471
Pulkovo_Gauss_12N = 28472
Pulkovo_Gauss_13N = 28473
Pulkovo_Gauss_14N = 28474
Pulkovo_Gauss_15N = 28475
Pulkovo_Gauss_16N = 28476
Pulkovo_Gauss_17N = 28477
Pulkovo_Gauss_18N = 28478
Pulkovo_Gauss_19N = 28479
Pulkovo_Gauss_20N = 28480
Pulkovo_Gauss_21N = 28481
Pulkovo_Gauss_22N = 28482
Pulkovo_Gauss_23N = 28483
Pulkovo_Gauss_24N = 28484
Pulkovo_Gauss_25N = 28485
Pulkovo_Gauss_26N = 28486
Pulkovo_Gauss_27N = 28487
Pulkovo_Gauss_28N = 28488
Pulkovo_Gauss_29N = 28489
Pulkovo_Gauss_30N = 28490
Pulkovo_Gauss_31N = 28491
Pulkovo_Gauss_32N = 28492
Qatar_National_Grid = 28600
RD_Netherlands_Old = 28991
RD_Netherlands_New = 28992
SAD69_UTM_zone_18N = 29118
SAD69_UTM_zone_19N = 29119
SAD69_UTM_zone_20N = 29120
SAD69_UTM_zone_21N = 29121
SAD69_UTM_zone_22N = 29122
SAD69_UTM_zone_17S = 29177
SAD69_UTM_zone_18S = 29178
SAD69_UTM_zone_19S = 29179
SAD69_UTM_zone_20S = 29180
SAD69_UTM_zone_21S = 29181
SAD69_UTM_zone_22S = 29182
SAD69_UTM_zone_23S = 29183
SAD69_UTM_zone_24S = 29184
SAD69_UTM_zone_25S = 29185
Sapper_Hill_UTM_20S = 29220
Sapper_Hill_UTM_21S = 29221
Schwarzeck_UTM_33S = 29333
Sudan_UTM_zone_35N = 29635
Sudan_UTM_zone_36N = 29636
Tananarive_Laborde = 29700
Tananarive_UTM_38S = 29738
Tananarive_UTM_39S = 29739
Timbalai_1948_Borneo = 29800
Timbalai_1948_UTM_49N = 29849
Timbalai_1948_UTM_50N = 29850
TM65_Irish_Nat_Grid = 29900
Trinidad_1903_Trinidad = 30200
TC_1948_UTM_zone_39N = 30339
TC_1948_UTM_zone_40N = 30340
Voirol_N_Algerie_ancien = 30491
Voirol_S_Algerie_ancien = 30492
Voirol_Unifie_N_Algerie = 30591
Voirol_Unifie_S_Algerie = 30592
Bern_1938_Swiss_New = 30600
Nord_Sahara_UTM_29N = 30729
Nord_Sahara_UTM_30N = 30730
Nord_Sahara_UTM_31N = 30731
Nord_Sahara_UTM_32N = 30732
Yoff_UTM_zone_28N = 31028
Zanderij_UTM_zone_21N = 31121
MGI_Austria_West = 31291
MGI_Austria_Central = 31292
MGI_Austria_East = 31293
Belge_Lambert_72 = 31300
DHDN_Germany_zone_1 = 31491
DHDN_Germany_zone_2 = 31492
DHDN_Germany_zone_3 = 31493
DHDN_Germany_zone_4 = 31494
DHDN_Germany_zone_5 = 31495
NAD27_Montana_North = 32001
NAD27_Montana_Central = 32002
NAD27_Montana_South = 32003
NAD27_Nebraska_North = 32005
NAD27_Nebraska_South = 32006
NAD27_Nevada_East = 32007
NAD27_Nevada_Central = 32008
NAD27_Nevada_West = 32009
NAD27_New_Hampshire = 32010
NAD27_New_Jersey = 32011
NAD27_New_Mexico_East = 32012
NAD27_New_Mexico_Cent = 32013
NAD27_New_Mexico_West = 32014
NAD27_New_York_East = 32015
NAD27_New_York_Central = 32016
NAD27_New_York_West = 32017
NAD27_New_York_Long_Is = 32018
NAD27_North_Carolina = 32019
NAD27_North_Dakota_N = 32020
NAD27_North_Dakota_S = 32021
NAD27_Ohio_North = 32022
NAD27_Ohio_South = 32023
NAD27_Oklahoma_North = 32024
NAD27_Oklahoma_South = 32025
NAD27_Oregon_North = 32026
NAD27_Oregon_South = 32027
NAD27_Pennsylvania_N = 32028
NAD27_Pennsylvania_S = 32029
NAD27_Rhode_Island = 32030
NAD27_South_Carolina_N = 32031
NAD27_South_Carolina_S = 32033
NAD27_South_Dakota_N = 32034
NAD27_South_Dakota_S = 32035
NAD27_Tennessee = 2204
NAD27_Texas_North = 32037
NAD27_Texas_North_Cen = 32038
NAD27_Texas_Central = 32039
NAD27_Texas_South_Cen = 32040
NAD27_Texas_South = 32041
NAD27_Utah_North = 32042
NAD27_Utah_Central = 32043
NAD27_Utah_South = 32044
NAD27_Vermont = 32045
NAD27_Virginia_North = 32046
NAD27_Virginia_South = 32047
NAD27_Washington_North = 32048
NAD27_Washington_South = 32049
NAD27_West_Virginia_N = 32050
NAD27_West_Virginia_S = 32051
NAD27_Wisconsin_North = 32052
NAD27_Wisconsin_Cen = 32053
NAD27_Wisconsin_South = 32054
NAD27_Wyoming_East = 32055
NAD27_Wyoming_E_Cen = 32056
NAD27_Wyoming_W_Cen = 32057
NAD27_Wyoming_West = 32058
NAD27_Puerto_Rico = 32059
NAD27_St_Croix = 32060
NAD83_Montana = 32100
NAD83_Nebraska = 32104
NAD83_Nevada_East = 32107
NAD83_Nevada_Central = 32108
NAD83_Nevada_West = 32109
NAD83_New_Hampshire = 32110
NAD83_New_Jersey = 32111
NAD83_New_Mexico_East = 32112
NAD83_New_Mexico_Cent = 32113
NAD83_New_Mexico_West = 32114
NAD83_New_York_East = 32115
NAD83_New_York_Central = 32116
NAD83_New_York_West = 32117
NAD83_New_York_Long_Is = 32118
NAD83_North_Carolina = 32119
NAD83_North_Dakota_N = 32120
NAD83_North_Dakota_S = 32121
NAD83_Ohio_North = 32122
NAD83_Ohio_South = 32123
NAD83_Oklahoma_North = 32124
NAD83_Oklahoma_South = 32125
NAD83_Oregon_North = 32126
NAD83_Oregon_South = 32127
NAD83_Pennsylvania_N = 32128
NAD83_Pennsylvania_S = 32129
NAD83_Rhode_Island = 32130
NAD83_South_Carolina = 32133
NAD83_South_Dakota_N = 32134
NAD83_South_Dakota_S = 32135
NAD83_Tennessee = 32136
NAD83_Texas_North = 32137
NAD83_Texas_North_Cen = 32138
NAD83_Texas_Central = 32139
NAD83_Texas_South_Cen = 32140
NAD83_Texas_South = 32141
NAD83_Utah_North = 32142
NAD83_Utah_Central = 32143
NAD83_Utah_South = 32144
NAD83_Vermont = 32145
NAD83_Virginia_North = 32146
NAD83_Virginia_South = 32147
NAD83_Washington_North = 32148
NAD83_Washington_South = 32149
NAD83_West_Virginia_N = 32150
NAD83_West_Virginia_S = 32151
NAD83_Wisconsin_North = 32152
NAD83_Wisconsin_Cen = 32153
NAD83_Wisconsin_South = 32154
NAD83_Wyoming_East = 32155
NAD83_Wyoming_E_Cen = 32156
NAD83_Wyoming_W_Cen = 32157
NAD83_Wyoming_West = 32158
NAD83_Puerto_Rico_Virgin_Is = 32161
WGS72_UTM_zone_1N = 32201
WGS72_UTM_zone_2N = 32202
WGS72_UTM_zone_3N = 32203
WGS72_UTM_zone_4N = 32204
WGS72_UTM_zone_5N = 32205
WGS72_UTM_zone_6N = 32206
WGS72_UTM_zone_7N = 32207
WGS72_UTM_zone_8N = 32208
WGS72_UTM_zone_9N = 32209
WGS72_UTM_zone_10N = 32210
WGS72_UTM_zone_11N = 32211
WGS72_UTM_zone_12N = 32212
WGS72_UTM_zone_13N = 32213
WGS72_UTM_zone_14N = 32214
WGS72_UTM_zone_15N = 32215
WGS72_UTM_zone_16N = 32216
WGS72_UTM_zone_17N = 32217
WGS72_UTM_zone_18N = 32218
WGS72_UTM_zone_19N = 32219
WGS72_UTM_zone_20N = 32220
WGS72_UTM_zone_21N = 32221
WGS72_UTM_zone_22N = 32222
WGS72_UTM_zone_23N = 32223
WGS72_UTM_zone_24N = 32224
WGS72_UTM_zone_25N = 32225
WGS72_UTM_zone_26N = 32226
WGS72_UTM_zone_27N = 32227
WGS72_UTM_zone_28N = 32228
WGS72_UTM_zone_29N = 32229
WGS72_UTM_zone_30N = 32230
WGS72_UTM_zone_31N = 32231
WGS72_UTM_zone_32N = 32232
WGS72_UTM_zone_33N = 32233
WGS72_UTM_zone_34N = 32234
WGS72_UTM_zone_35N = 32235
WGS72_UTM_zone_36N = 32236
WGS72_UTM_zone_37N = 32237
WGS72_UTM_zone_38N = 32238
WGS72_UTM_zone_39N = 32239
WGS72_UTM_zone_40N = 32240
WGS72_UTM_zone_41N = 32241
WGS72_UTM_zone_42N = 32242
WGS72_UTM_zone_43N = 32243
WGS72_UTM_zone_44N = 32244
WGS72_UTM_zone_45N = 32245
WGS72_UTM_zone_46N = 32246
WGS72_UTM_zone_47N = 32247
WGS72_UTM_zone_48N = 32248
WGS72_UTM_zone_49N = 32249
WGS72_UTM_zone_50N = 32250
WGS72_UTM_zone_51N = 32251
WGS72_UTM_zone_52N = 32252
WGS72_UTM_zone_53N = 32253
WGS72_UTM_zone_54N = 32254
WGS72_UTM_zone_55N = 32255
WGS72_UTM_zone_56N = 32256
WGS72_UTM_zone_57N = 32257
WGS72_UTM_zone_58N = 32258
WGS72_UTM_zone_59N = 32259
WGS72_UTM_zone_60N = 32260
WGS72_UTM_zone_1S = 32301
WGS72_UTM_zone_2S = 32302
WGS72_UTM_zone_3S = 32303
WGS72_UTM_zone_4S = 32304
WGS72_UTM_zone_5S = 32305
WGS72_UTM_zone_6S = 32306
WGS72_UTM_zone_7S = 32307
WGS72_UTM_zone_8S = 32308
WGS72_UTM_zone_9S = 32309
WGS72_UTM_zone_10S = 32310
WGS72_UTM_zone_11S = 32311
WGS72_UTM_zone_12S = 32312
WGS72_UTM_zone_13S = 32313
WGS72_UTM_zone_14S = 32314
WGS72_UTM_zone_15S = 32315
WGS72_UTM_zone_16S = 32316
WGS72_UTM_zone_17S = 32317
WGS72_UTM_zone_18S = 32318
WGS72_UTM_zone_19S = 32319
WGS72_UTM_zone_20S = 32320
WGS72_UTM_zone_21S = 32321
WGS72_UTM_zone_22S = 32322
WGS72_UTM_zone_23S = 32323
WGS72_UTM_zone_24S = 32324
WGS72_UTM_zone_25S = 32325
WGS72_UTM_zone_26S = 32326
WGS72_UTM_zone_27S = 32327
WGS72_UTM_zone_28S = 32328
WGS72_UTM_zone_29S = 32329
WGS72_UTM_zone_30S = 32330
WGS72_UTM_zone_31S = 32331
WGS72_UTM_zone_32S = 32332
WGS72_UTM_zone_33S = 32333
WGS72_UTM_zone_34S = 32334
WGS72_UTM_zone_35S = 32335
WGS72_UTM_zone_36S = 32336
WGS72_UTM_zone_37S = 32337
WGS72_UTM_zone_38S = 32338
WGS72_UTM_zone_39S = 32339
WGS72_UTM_zone_40S = 32340
WGS72_UTM_zone_41S = 32341
WGS72_UTM_zone_42S = 32342
WGS72_UTM_zone_43S = 32343
WGS72_UTM_zone_44S = 32344
WGS72_UTM_zone_45S = 32345
WGS72_UTM_zone_46S = 32346
WGS72_UTM_zone_47S = 32347
WGS72_UTM_zone_48S = 32348
WGS72_UTM_zone_49S = 32349
WGS72_UTM_zone_50S = 32350
WGS72_UTM_zone_51S = 32351
WGS72_UTM_zone_52S = 32352
WGS72_UTM_zone_53S = 32353
WGS72_UTM_zone_54S = 32354
WGS72_UTM_zone_55S = 32355
WGS72_UTM_zone_56S = 32356
WGS72_UTM_zone_57S = 32357
WGS72_UTM_zone_58S = 32358
WGS72_UTM_zone_59S = 32359
WGS72_UTM_zone_60S = 32360
WGS72BE_UTM_zone_1N = 32401
WGS72BE_UTM_zone_2N = 32402
WGS72BE_UTM_zone_3N = 32403
WGS72BE_UTM_zone_4N = 32404
WGS72BE_UTM_zone_5N = 32405
WGS72BE_UTM_zone_6N = 32406
WGS72BE_UTM_zone_7N = 32407
WGS72BE_UTM_zone_8N = 32408
WGS72BE_UTM_zone_9N = 32409
WGS72BE_UTM_zone_10N = 32410
WGS72BE_UTM_zone_11N = 32411
WGS72BE_UTM_zone_12N = 32412
WGS72BE_UTM_zone_13N = 32413
WGS72BE_UTM_zone_14N = 32414
WGS72BE_UTM_zone_15N = 32415
WGS72BE_UTM_zone_16N = 32416
WGS72BE_UTM_zone_17N = 32417
WGS72BE_UTM_zone_18N = 32418
WGS72BE_UTM_zone_19N = 32419
WGS72BE_UTM_zone_20N = 32420
WGS72BE_UTM_zone_21N = 32421
WGS72BE_UTM_zone_22N = 32422
WGS72BE_UTM_zone_23N = 32423
WGS72BE_UTM_zone_24N = 32424
WGS72BE_UTM_zone_25N = 32425
WGS72BE_UTM_zone_26N = 32426
WGS72BE_UTM_zone_27N = 32427
WGS72BE_UTM_zone_28N = 32428
WGS72BE_UTM_zone_29N = 32429
WGS72BE_UTM_zone_30N = 32430
WGS72BE_UTM_zone_31N = 32431
WGS72BE_UTM_zone_32N = 32432
WGS72BE_UTM_zone_33N = 32433
WGS72BE_UTM_zone_34N = 32434
WGS72BE_UTM_zone_35N = 32435
WGS72BE_UTM_zone_36N = 32436
WGS72BE_UTM_zone_37N = 32437
WGS72BE_UTM_zone_38N = 32438
WGS72BE_UTM_zone_39N = 32439
WGS72BE_UTM_zone_40N = 32440
WGS72BE_UTM_zone_41N = 32441
WGS72BE_UTM_zone_42N = 32442
WGS72BE_UTM_zone_43N = 32443
WGS72BE_UTM_zone_44N = 32444
WGS72BE_UTM_zone_45N = 32445
WGS72BE_UTM_zone_46N = 32446
WGS72BE_UTM_zone_47N = 32447
WGS72BE_UTM_zone_48N = 32448
WGS72BE_UTM_zone_49N = 32449
WGS72BE_UTM_zone_50N = 32450
WGS72BE_UTM_zone_51N = 32451
WGS72BE_UTM_zone_52N = 32452
WGS72BE_UTM_zone_53N = 32453
WGS72BE_UTM_zone_54N = 32454
WGS72BE_UTM_zone_55N = 32455
WGS72BE_UTM_zone_56N = 32456
WGS72BE_UTM_zone_57N = 32457
WGS72BE_UTM_zone_58N = 32458
WGS72BE_UTM_zone_59N = 32459
WGS72BE_UTM_zone_60N = 32460
WGS72BE_UTM_zone_1S = 32501
WGS72BE_UTM_zone_2S = 32502
WGS72BE_UTM_zone_3S = 32503
WGS72BE_UTM_zone_4S = 32504
WGS72BE_UTM_zone_5S = 32505
WGS72BE_UTM_zone_6S = 32506
WGS72BE_UTM_zone_7S = 32507
WGS72BE_UTM_zone_8S = 32508
WGS72BE_UTM_zone_9S = 32509
WGS72BE_UTM_zone_10S = 32510
WGS72BE_UTM_zone_11S = 32511
WGS72BE_UTM_zone_12S = 32512
WGS72BE_UTM_zone_13S = 32513
WGS72BE_UTM_zone_14S = 32514
WGS72BE_UTM_zone_15S = 32515
WGS72BE_UTM_zone_16S = 32516
WGS72BE_UTM_zone_17S = 32517
WGS72BE_UTM_zone_18S = 32518
WGS72BE_UTM_zone_19S = 32519
WGS72BE_UTM_zone_20S = 32520
WGS72BE_UTM_zone_21S = 32521
WGS72BE_UTM_zone_22S = 32522
WGS72BE_UTM_zone_23S = 32523
WGS72BE_UTM_zone_24S = 32524
WGS72BE_UTM_zone_25S = 32525
WGS72BE_UTM_zone_26S = 32526
WGS72BE_UTM_zone_27S = 32527
WGS72BE_UTM_zone_28S = 32528
WGS72BE_UTM_zone_29S = 32529
WGS72BE_UTM_zone_30S = 32530
WGS72BE_UTM_zone_31S = 32531
WGS72BE_UTM_zone_32S = 32532
WGS72BE_UTM_zone_33S = 32533
WGS72BE_UTM_zone_34S = 32534
WGS72BE_UTM_zone_35S = 32535
WGS72BE_UTM_zone_36S = 32536
WGS72BE_UTM_zone_37S = 32537
WGS72BE_UTM_zone_38S = 32538
WGS72BE_UTM_zone_39S = 32539
WGS72BE_UTM_zone_40S = 32540
WGS72BE_UTM_zone_41S = 32541
WGS72BE_UTM_zone_42S = 32542
WGS72BE_UTM_zone_43S = 32543
WGS72BE_UTM_zone_44S = 32544
WGS72BE_UTM_zone_45S = 32545
WGS72BE_UTM_zone_46S = 32546
WGS72BE_UTM_zone_47S = 32547
WGS72BE_UTM_zone_48S = 32548
WGS72BE_UTM_zone_49S = 32549
WGS72BE_UTM_zone_50S = 32550
WGS72BE_UTM_zone_51S = 32551
WGS72BE_UTM_zone_52S = 32552
WGS72BE_UTM_zone_53S = 32553
WGS72BE_UTM_zone_54S = 32554
WGS72BE_UTM_zone_55S = 32555
WGS72BE_UTM_zone_56S = 32556
WGS72BE_UTM_zone_57S = 32557
WGS72BE_UTM_zone_58S = 32558
WGS72BE_UTM_zone_59S = 32559
WGS72BE_UTM_zone_60S = 32560
WGS84_UTM_zone_1N = 32601
WGS84_UTM_zone_2N = 32602
WGS84_UTM_zone_3N = 32603
WGS84_UTM_zone_4N = 32604
WGS84_UTM_zone_5N = 32605
WGS84_UTM_zone_6N = 32606
WGS84_UTM_zone_7N = 32607
WGS84_UTM_zone_8N = 32608
WGS84_UTM_zone_9N = 32609
WGS84_UTM_zone_10N = 32610
WGS84_UTM_zone_11N = 32611
WGS84_UTM_zone_12N = 32612
WGS84_UTM_zone_13N = 32613
WGS84_UTM_zone_14N = 32614
WGS84_UTM_zone_15N = 32615
WGS84_UTM_zone_16N = 32616
WGS84_UTM_zone_17N = 32617
WGS84_UTM_zone_18N = 32618
WGS84_UTM_zone_19N = 32619
WGS84_UTM_zone_20N = 32620
WGS84_UTM_zone_21N = 32621
WGS84_UTM_zone_22N = 32622
WGS84_UTM_zone_23N = 32623
WGS84_UTM_zone_24N = 32624
WGS84_UTM_zone_25N = 32625
WGS84_UTM_zone_26N = 32626
WGS84_UTM_zone_27N = 32627
WGS84_UTM_zone_28N = 32628
WGS84_UTM_zone_29N = 32629
WGS84_UTM_zone_30N = 32630
WGS84_UTM_zone_31N = 32631
WGS84_UTM_zone_32N = 32632
WGS84_UTM_zone_33N = 32633
WGS84_UTM_zone_34N = 32634
WGS84_UTM_zone_35N = 32635
WGS84_UTM_zone_36N = 32636
WGS84_UTM_zone_37N = 32637
WGS84_UTM_zone_38N = 32638
WGS84_UTM_zone_39N = 32639
WGS84_UTM_zone_40N = 32640
WGS84_UTM_zone_41N = 32641
WGS84_UTM_zone_42N = 32642
WGS84_UTM_zone_43N = 32643
WGS84_UTM_zone_44N = 32644
WGS84_UTM_zone_45N = 32645
WGS84_UTM_zone_46N = 32646
WGS84_UTM_zone_47N = 32647
WGS84_UTM_zone_48N = 32648
WGS84_UTM_zone_49N = 32649
WGS84_UTM_zone_50N = 32650
WGS84_UTM_zone_51N = 32651
WGS84_UTM_zone_52N = 32652
WGS84_UTM_zone_53N = 32653
WGS84_UTM_zone_54N = 32654
WGS84_UTM_zone_55N = 32655
WGS84_UTM_zone_56N = 32656
WGS84_UTM_zone_57N = 32657
WGS84_UTM_zone_58N = 32658
WGS84_UTM_zone_59N = 32659
WGS84_UTM_zone_60N = 32660
WGS84_UTM_zone_1S = 32701
WGS84_UTM_zone_2S = 32702
WGS84_UTM_zone_3S = 32703
WGS84_UTM_zone_4S = 32704
WGS84_UTM_zone_5S = 32705
WGS84_UTM_zone_6S = 32706
WGS84_UTM_zone_7S = 32707
WGS84_UTM_zone_8S = 32708
WGS84_UTM_zone_9S = 32709
WGS84_UTM_zone_10S = 32710
WGS84_UTM_zone_11S = 32711
WGS84_UTM_zone_12S = 32712
WGS84_UTM_zone_13S = 32713
WGS84_UTM_zone_14S = 32714
WGS84_UTM_zone_15S = 32715
WGS84_UTM_zone_16S = 32716
WGS84_UTM_zone_17S = 32717
WGS84_UTM_zone_18S = 32718
WGS84_UTM_zone_19S = 32719
WGS84_UTM_zone_20S = 32720
WGS84_UTM_zone_21S = 32721
WGS84_UTM_zone_22S = 32722
WGS84_UTM_zone_23S = 32723
WGS84_UTM_zone_24S = 32724
WGS84_UTM_zone_25S = 32725
WGS84_UTM_zone_26S = 32726
WGS84_UTM_zone_27S = 32727
WGS84_UTM_zone_28S = 32728
WGS84_UTM_zone_29S = 32729
WGS84_UTM_zone_30S = 32730
WGS84_UTM_zone_31S = 32731
WGS84_UTM_zone_32S = 32732
WGS84_UTM_zone_33S = 32733
WGS84_UTM_zone_34S = 32734
WGS84_UTM_zone_35S = 32735
WGS84_UTM_zone_36S = 32736
WGS84_UTM_zone_37S = 32737
WGS84_UTM_zone_38S = 32738
WGS84_UTM_zone_39S = 32739
WGS84_UTM_zone_40S = 32740
WGS84_UTM_zone_41S = 32741
WGS84_UTM_zone_42S = 32742
WGS84_UTM_zone_43S = 32743
WGS84_UTM_zone_44S = 32744
WGS84_UTM_zone_45S = 32745
WGS84_UTM_zone_46S = 32746
WGS84_UTM_zone_47S = 32747
WGS84_UTM_zone_48S = 32748
WGS84_UTM_zone_49S = 32749
WGS84_UTM_zone_50S = 32750
WGS84_UTM_zone_51S = 32751
WGS84_UTM_zone_52S = 32752
WGS84_UTM_zone_53S = 32753
WGS84_UTM_zone_54S = 32754
WGS84_UTM_zone_55S = 32755
WGS84_UTM_zone_56S = 32756
WGS84_UTM_zone_57S = 32757
WGS84_UTM_zone_58S = 32758
WGS84_UTM_zone_59S = 32759
WGS84_UTM_zone_60S = 32760
# New
GGRS87_Greek_Grid = 2100
KKJ_Finland_zone_1 = 2391
KKJ_Finland_zone_2 = 2392
KKJ_Finland_zone_3 = 2393
KKJ_Finland_zone_4 = 2394
RT90_2_5_gon_W = 2400
Lietuvos_Koordinaciu_Sistema_1994 = 2600
Estonian_Coordinate_System_of_1992 = 3300
HD72_EOV = 23700
Dealul_Piscului_1970_Stereo_70 = 31700
# Newer
Hjorsey_1955_Lambert = 3053
ISN93_Lambert_1993 = 3057
ETRS89_Poland_CS2000_zone_5 = 2176
ETRS89_Poland_CS2000_zone_6 = 2177
ETRS89_Poland_CS2000_zone_7 = 2178
ETRS89_Poland_CS2000_zone_8 = 2179
ETRS89_Poland_CS92 = 2180
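The PCS table above is consumed as a plain `enum.IntEnum`, so a raw projected-CS code read from a GeoTIFF key can be turned into its symbolic name (and back) by simple construction. A minimal sketch, re-declaring just two of the members above for illustration rather than importing the full table:

```python
import enum

# Minimal re-declaration of two PCS members from the table above,
# purely for illustration; the real enum carries the full EPSG list.
class PCS(enum.IntEnum):
    NAD83_UTM_zone_10N = 26910
    WGS84_UTM_zone_33N = 32633

code = 32633                        # raw value as stored in a GeoTIFF key
member = PCS(code)                  # look up the member by value
print(member.name)                  # WGS84_UTM_zone_33N
print(int(PCS.NAD83_UTM_zone_10N))  # 26910
```

An unknown code raises `ValueError`, which is why callers typically guard the construction with `try`/`except` and fall back to the raw integer.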
class GCSE(enum.IntEnum):
"""Unspecified GCS based on ellipsoid."""
Undefined = 0
User_Defined = 32767
Airy1830 = 4001
AiryModified1849 = 4002
AustralianNationalSpheroid = 4003
Bessel1841 = 4004
BesselModified = 4005
BesselNamibia = 4006
Clarke1858 = 4007
Clarke1866 = 4008
Clarke1866Michigan = 4009
Clarke1880_Benoit = 4010
Clarke1880_IGN = 4011
Clarke1880_RGS = 4012
Clarke1880_Arc = 4013
Clarke1880_SGA1922 = 4014
Everest1830_1937Adjustment = 4015
Everest1830_1967Definition = 4016
Everest1830_1975Definition = 4017
Everest1830Modified = 4018
GRS1980 = 4019
Helmert1906 = 4020
IndonesianNationalSpheroid = 4021
International1924 = 4022
International1967 = 4023
Krassowsky1940 = 4024
NWL9D = 4025
NWL10D = 4026
Plessis1817 = 4027
Struve1860 = 4028
WarOffice = 4029
WGS84 = 4030
GEM10C = 4031
OSU86F = 4032
OSU91A = 4033
Clarke1880 = 4034
Sphere = 4035
class GCS(enum.IntEnum):
"""Geographic CS Type Codes."""
Undefined = 0
User_Defined = 32767
Adindan = 4201
AGD66 = 4202
AGD84 = 4203
Ain_el_Abd = 4204
Afgooye = 4205
Agadez = 4206
Lisbon = 4207
Aratu = 4208
Arc_1950 = 4209
Arc_1960 = 4210
Batavia = 4211
Barbados = 4212
Beduaram = 4213
Beijing_1954 = 4214
Belge_1950 = 4215
Bermuda_1957 = 4216
Bern_1898 = 4217
Bogota = 4218
Bukit_Rimpah = 4219
Camacupa = 4220
Campo_Inchauspe = 4221
Cape = 4222
Carthage = 4223
Chua = 4224
Corrego_Alegre = 4225
Cote_d_Ivoire = 4226
Deir_ez_Zor = 4227
Douala = 4228
Egypt_1907 = 4229
ED50 = 4230
ED87 = 4231
Fahud = 4232
Gandajika_1970 = 4233
Garoua = 4234
Guyane_Francaise = 4235
Hu_Tzu_Shan = 4236
HD72 = 4237
ID74 = 4238
Indian_1954 = 4239
Indian_1975 = 4240
Jamaica_1875 = 4241
JAD69 = 4242
Kalianpur = 4243
Kandawala = 4244
Kertau = 4245
KOC = 4246
La_Canoa = 4247
PSAD56 = 4248
Lake = 4249
Leigon = 4250
Liberia_1964 = 4251
Lome = 4252
Luzon_1911 = 4253
Hito_XVIII_1963 = 4254
Herat_North = 4255
Mahe_1971 = 4256
Makassar = 4257
EUREF89 = 4258
Malongo_1987 = 4259
Manoca = 4260
Merchich = 4261
Massawa = 4262
Minna = 4263
Mhast = 4264
Monte_Mario = 4265
M_poraloko = 4266
NAD27 = 4267
NAD_Michigan = 4268
NAD83 = 4269
Nahrwan_1967 = 4270
Naparima_1972 = 4271
GD49 = 4272
NGO_1948 = 4273
Datum_73 = 4274
NTF = 4275
NSWC_9Z_2 = 4276
OSGB_1936 = 4277
OSGB70 = 4278
OS_SN80 = 4279
Padang = 4280
Palestine_1923 = 4281
Pointe_Noire = 4282
GDA94 = 4283
Pulkovo_1942 = 4284
Qatar = 4285
Qatar_1948 = 4286
Qornoq = 4287
Loma_Quintana = 4288
Amersfoort = 4289
RT38 = 4290
SAD69 = 4291
Sapper_Hill_1943 = 4292
Schwarzeck = 4293
Segora = 4294
Serindung = 4295
Sudan = 4296
Tananarive = 4297
Timbalai_1948 = 4298
TM65 = 4299
TM75 = 4300
Tokyo = 4301
Trinidad_1903 = 4302
TC_1948 = 4303
Voirol_1875 = 4304
Voirol_Unifie = 4305
Bern_1938 = 4306
Nord_Sahara_1959 = 4307
Stockholm_1938 = 4308
Yacare = 4309
Yoff = 4310
Zanderij = 4311
MGI = 4312
Belge_1972 = 4313
DHDN = 4314
Conakry_1905 = 4315
WGS_72 = 4322
WGS_72BE = 4324
WGS_84 = 4326
Bern_1898_Bern = 4801
Bogota_Bogota = 4802
Lisbon_Lisbon = 4803
Makassar_Jakarta = 4804
MGI_Ferro = 4805
Monte_Mario_Rome = 4806
NTF_Paris = 4807
Padang_Jakarta = 4808
Belge_1950_Brussels = 4809
Tananarive_Paris = 4810
Voirol_1875_Paris = 4811
Voirol_Unifie_Paris = 4812
Batavia_Jakarta = 4813
ATF_Paris = 4901
NDG_Paris = 4902
# New GCS
Greek = 4120
GGRS87 = 4121
KKJ = 4123
RT90 = 4124
EST92 = 4133
Dealul_Piscului_1970 = 4317
Greek_Athens = 4815
class Ellipse(enum.IntEnum):
"""Ellipsoid Codes."""
Undefined = 0
User_Defined = 32767
Airy_1830 = 7001
Airy_Modified_1849 = 7002
Australian_National_Spheroid = 7003
Bessel_1841 = 7004
Bessel_Modified = 7005
Bessel_Namibia = 7006
Clarke_1858 = 7007
Clarke_1866 = 7008
Clarke_1866_Michigan = 7009
Clarke_1880_Benoit = 7010
Clarke_1880_IGN = 7011
Clarke_1880_RGS = 7012
Clarke_1880_Arc = 7013
Clarke_1880_SGA_1922 = 7014
Everest_1830_1937_Adjustment = 7015
Everest_1830_1967_Definition = 7016
Everest_1830_1975_Definition = 7017
Everest_1830_Modified = 7018
GRS_1980 = 7019
Helmert_1906 = 7020
Indonesian_National_Spheroid = 7021
International_1924 = 7022
International_1967 = 7023
Krassowsky_1940 = 7024
NWL_9D = 7025
NWL_10D = 7026
Plessis_1817 = 7027
Struve_1860 = 7028
War_Office = 7029
WGS_84 = 7030
GEM_10C = 7031
OSU86F = 7032
OSU91A = 7033
Clarke_1880 = 7034
Sphere = 7035
class DatumE(enum.IntEnum):
"""Ellipsoid-Only Geodetic Datum Codes."""
Undefined = 0
User_Defined = 32767
Airy1830 = 6001
AiryModified1849 = 6002
AustralianNationalSpheroid = 6003
Bessel1841 = 6004
BesselModified = 6005
BesselNamibia = 6006
Clarke1858 = 6007
Clarke1866 = 6008
Clarke1866Michigan = 6009
Clarke1880_Benoit = 6010
Clarke1880_IGN = 6011
Clarke1880_RGS = 6012
Clarke1880_Arc = 6013
Clarke1880_SGA1922 = 6014
Everest1830_1937Adjustment = 6015
Everest1830_1967Definition = 6016
Everest1830_1975Definition = 6017
Everest1830Modified = 6018
GRS1980 = 6019
Helmert1906 = 6020
IndonesianNationalSpheroid = 6021
International1924 = 6022
International1967 = 6023
Krassowsky1940 = 6024
NWL9D = 6025
NWL10D = 6026
Plessis1817 = 6027
Struve1860 = 6028
WarOffice = 6029
WGS84 = 6030
GEM10C = 6031
OSU86F = 6032
OSU91A = 6033
Clarke1880 = 6034
Sphere = 6035
class Datum(enum.IntEnum):
"""Geodetic Datum Codes."""
Undefined = 0
User_Defined = 32767
Adindan = 6201
Australian_Geodetic_Datum_1966 = 6202
Australian_Geodetic_Datum_1984 = 6203
Ain_el_Abd_1970 = 6204
Afgooye = 6205
Agadez = 6206
Lisbon = 6207
Aratu = 6208
Arc_1950 = 6209
Arc_1960 = 6210
Batavia = 6211
Barbados = 6212
Beduaram = 6213
Beijing_1954 = 6214
Reseau_National_Belge_1950 = 6215
Bermuda_1957 = 6216
Bern_1898 = 6217
Bogota = 6218
Bukit_Rimpah = 6219
Camacupa = 6220
Campo_Inchauspe = 6221
Cape = 6222
Carthage = 6223
Chua = 6224
Corrego_Alegre = 6225
Cote_d_Ivoire = 6226
Deir_ez_Zor = 6227
Douala = 6228
Egypt_1907 = 6229
European_Datum_1950 = 6230
European_Datum_1987 = 6231
Fahud = 6232
Gandajika_1970 = 6233
Garoua = 6234
Guyane_Francaise = 6235
Hu_Tzu_Shan = 6236
Hungarian_Datum_1972 = 6237
Indonesian_Datum_1974 = 6238
Indian_1954 = 6239
Indian_1975 = 6240
Jamaica_1875 = 6241
Jamaica_1969 = 6242
Kalianpur = 6243
Kandawala = 6244
Kertau = 6245
Kuwait_Oil_Company = 6246
La_Canoa = 6247
Provisional_S_American_Datum_1956 = 6248
Lake = 6249
Leigon = 6250
Liberia_1964 = 6251
Lome = 6252
Luzon_1911 = 6253
Hito_XVIII_1963 = 6254
Herat_North = 6255
Mahe_1971 = 6256
Makassar = 6257
European_Reference_System_1989 = 6258
Malongo_1987 = 6259
Manoca = 6260
Merchich = 6261
Massawa = 6262
Minna = 6263
Mhast = 6264
Monte_Mario = 6265
M_poraloko = 6266
North_American_Datum_1927 = 6267
NAD_Michigan = 6268
North_American_Datum_1983 = 6269
Nahrwan_1967 = 6270
Naparima_1972 = 6271
New_Zealand_Geodetic_Datum_1949 = 6272
NGO_1948 = 6273
Datum_73 = 6274
Nouvelle_Triangulation_Francaise = 6275
NSWC_9Z_2 = 6276
OSGB_1936 = 6277
OSGB_1970_SN = 6278
OS_SN_1980 = 6279
Padang_1884 = 6280
Palestine_1923 = 6281
Pointe_Noire = 6282
Geocentric_Datum_of_Australia_1994 = 6283
Pulkovo_1942 = 6284
Qatar = 6285
Qatar_1948 = 6286
Qornoq = 6287
Loma_Quintana = 6288
Amersfoort = 6289
RT38 = 6290
South_American_Datum_1969 = 6291
Sapper_Hill_1943 = 6292
Schwarzeck = 6293
Segora = 6294
Serindung = 6295
Sudan = 6296
Tananarive_1925 = 6297
Timbalai_1948 = 6298
TM65 = 6299
TM75 = 6300
Tokyo = 6301
Trinidad_1903 = 6302
Trucial_Coast_1948 = 6303
Voirol_1875 = 6304
Voirol_Unifie_1960 = 6305
Bern_1938 = 6306
Nord_Sahara_1959 = 6307
Stockholm_1938 = 6308
Yacare = 6309
Yoff = 6310
Zanderij = 6311
Militar_Geographische_Institut = 6312
Reseau_National_Belge_1972 = 6313
Deutsche_Hauptdreiecksnetz = 6314
Conakry_1905 = 6315
WGS72 = 6322
WGS72_Transit_Broadcast_Ephemeris = 6324
WGS84 = 6326
Ancienne_Triangulation_Francaise = 6901
Nord_de_Guerre = 6902
Dealul_Piscului_1970 = 6317
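For most geographic entries, the `Datum` codes above follow a simple EPSG convention: the geodetic datum code equals the corresponding `GCS` code plus 2000 (Adindan 4201 → 6201, WGS 84 4326 → 6326). A small sketch checking the offset on a few pairs copied from the tables; the rule holds for many, though not all, entries:

```python
# A few (GCS, Datum) code pairs taken from the tables above. EPSG
# usually assigns geodetic datum code = geographic CRS code + 2000.
PAIRS = {
    'Adindan': (4201, 6201),
    'NAD27':   (4267, 6267),
    'WGS_84':  (4326, 6326),
}

for name, (gcs, datum) in PAIRS.items():
    assert datum == gcs + 2000, name
```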
class ModelType(enum.IntEnum):
"""Model Type Codes."""
Undefined = 0
User_Defined = 32767
Projected = 1
Geographic = 2
Geocentric = 3
class RasterPixel(enum.IntEnum):
"""Raster Type Codes."""
Undefined = 0
User_Defined = 32767
IsArea = 1
IsPoint = 2
class Linear(enum.IntEnum):
"""Linear Units."""
Undefined = 0
User_Defined = 32767
Meter = 9001
Foot = 9002
Foot_US_Survey = 9003
Foot_Modified_American = 9004
Foot_Clarke = 9005
Foot_Indian = 9006
Link = 9007
Link_Benoit = 9008
Link_Sears = 9009
Chain_Benoit = 9010
Chain_Sears = 9011
Yard_Sears = 9012
Yard_Indian = 9013
Fathom = 9014
Mile_International_Nautical = 9015
class Angular(enum.IntEnum):
"""Angular Units."""
Undefined = 0
User_Defined = 32767
Radian = 9101
Degree = 9102
Arc_Minute = 9103
Arc_Second = 9104
Grad = 9105
Gon = 9106
DMS = 9107
DMS_Hemisphere = 9108
class PM(enum.IntEnum):
"""Prime Meridian Codes."""
Undefined = 0
User_Defined = 32767
Greenwich = 8901
Lisbon = 8902
Paris = 8903
Bogota = 8904
Madrid = 8905
Rome = 8906
Bern = 8907
Jakarta = 8908
Ferro = 8909
Brussels = 8910
Stockholm = 8911
class CT(enum.IntEnum):
"""Coordinate Transformation Codes."""
Undefined = 0
User_Defined = 32767
TransverseMercator = 1
TransvMercator_Modified_Alaska = 2
ObliqueMercator = 3
ObliqueMercator_Laborde = 4
ObliqueMercator_Rosenmund = 5
ObliqueMercator_Spherical = 6
Mercator = 7
LambertConfConic_2SP = 8
LambertConfConic_Helmert = 9
LambertAzimEqualArea = 10
AlbersEqualArea = 11
AzimuthalEquidistant = 12
EquidistantConic = 13
Stereographic = 14
PolarStereographic = 15
ObliqueStereographic = 16
Equirectangular = 17
CassiniSoldner = 18
Gnomonic = 19
MillerCylindrical = 20
Orthographic = 21
Polyconic = 22
Robinson = 23
Sinusoidal = 24
VanDerGrinten = 25
NewZealandMapGrid = 26
TransvMercator_SouthOriented = 27
CylindricalEqualArea = 28
HotineObliqueMercatorAzimuthCenter = 9815


class VertCS(enum.IntEnum):
    """Vertical CS Type Codes."""
    Undefined = 0
    User_Defined = 32767
    Airy_1830_ellipsoid = 5001
    Airy_Modified_1849_ellipsoid = 5002
    ANS_ellipsoid = 5003
    Bessel_1841_ellipsoid = 5004
    Bessel_Modified_ellipsoid = 5005
    Bessel_Namibia_ellipsoid = 5006
    Clarke_1858_ellipsoid = 5007
    Clarke_1866_ellipsoid = 5008
    Clarke_1880_Benoit_ellipsoid = 5010
    Clarke_1880_IGN_ellipsoid = 5011
    Clarke_1880_RGS_ellipsoid = 5012
    Clarke_1880_Arc_ellipsoid = 5013
    Clarke_1880_SGA_1922_ellipsoid = 5014
    Everest_1830_1937_Adjustment_ellipsoid = 5015
    Everest_1830_1967_Definition_ellipsoid = 5016
    Everest_1830_1975_Definition_ellipsoid = 5017
    Everest_1830_Modified_ellipsoid = 5018
    GRS_1980_ellipsoid = 5019
    Helmert_1906_ellipsoid = 5020
    INS_ellipsoid = 5021
    International_1924_ellipsoid = 5022
    International_1967_ellipsoid = 5023
    Krassowsky_1940_ellipsoid = 5024
    NWL_9D_ellipsoid = 5025
    NWL_10D_ellipsoid = 5026
    Plessis_1817_ellipsoid = 5027
    Struve_1860_ellipsoid = 5028
    War_Office_ellipsoid = 5029
    WGS_84_ellipsoid = 5030
    GEM_10C_ellipsoid = 5031
    OSU86F_ellipsoid = 5032
    OSU91A_ellipsoid = 5033
    # Orthometric Vertical CS
    Newlyn = 5101
    North_American_Vertical_Datum_1929 = 5102
    North_American_Vertical_Datum_1988 = 5103
    Yellow_Sea_1956 = 5104
    Baltic_Sea = 5105
    Caspian_Sea = 5106


GEO_CODES = {
    'GTModelTypeGeoKey': ModelType,
    'GTRasterTypeGeoKey': RasterPixel,
    'GeographicTypeGeoKey': GCS,
    'GeogEllipsoidGeoKey': Ellipse,
    'ProjectedCSTypeGeoKey': PCS,
    'ProjectionGeoKey': Proj,
    'VerticalCSTypeGeoKey': VertCS,
    # 'VerticalDatumGeoKey': VertCS,
    'GeogLinearUnitsGeoKey': Linear,
    'ProjLinearUnitsGeoKey': Linear,
    'VerticalUnitsGeoKey': Linear,
    'GeogAngularUnitsGeoKey': Angular,
    'GeogAzimuthUnitsGeoKey': Angular,
    'ProjCoordTransGeoKey': CT,
    'GeogPrimeMeridianGeoKey': PM,
}

GEO_KEYS = {
    1024: 'GTModelTypeGeoKey',
    1025: 'GTRasterTypeGeoKey',
    1026: 'GTCitationGeoKey',
    2048: 'GeographicTypeGeoKey',
    2049: 'GeogCitationGeoKey',
    2050: 'GeogGeodeticDatumGeoKey',
    2051: 'GeogPrimeMeridianGeoKey',
    2052: 'GeogLinearUnitsGeoKey',
    2053: 'GeogLinearUnitSizeGeoKey',
    2054: 'GeogAngularUnitsGeoKey',
    2055: 'GeogAngularUnitSizeGeoKey',
    2056: 'GeogEllipsoidGeoKey',
    2057: 'GeogSemiMajorAxisGeoKey',
    2058: 'GeogSemiMinorAxisGeoKey',
    2059: 'GeogInvFlatteningGeoKey',
    2060: 'GeogAzimuthUnitsGeoKey',
    2061: 'GeogPrimeMeridianLongGeoKey',
    2062: 'GeogTOWGS84GeoKey',
    3059: 'ProjLinearUnitsInterpCorrectGeoKey',  # GDAL
    3072: 'ProjectedCSTypeGeoKey',
    3073: 'PCSCitationGeoKey',
    3074: 'ProjectionGeoKey',
    3075: 'ProjCoordTransGeoKey',
    3076: 'ProjLinearUnitsGeoKey',
    3077: 'ProjLinearUnitSizeGeoKey',
    3078: 'ProjStdParallel1GeoKey',
    3079: 'ProjStdParallel2GeoKey',
    3080: 'ProjNatOriginLongGeoKey',
    3081: 'ProjNatOriginLatGeoKey',
    3082: 'ProjFalseEastingGeoKey',
    3083: 'ProjFalseNorthingGeoKey',
    3084: 'ProjFalseOriginLongGeoKey',
    3085: 'ProjFalseOriginLatGeoKey',
    3086: 'ProjFalseOriginEastingGeoKey',
    3087: 'ProjFalseOriginNorthingGeoKey',
    3088: 'ProjCenterLongGeoKey',
    3089: 'ProjCenterLatGeoKey',
    3090: 'ProjCenterEastingGeoKey',
    3091: 'ProjCenterNorthingGeoKey',
    3092: 'ProjScaleAtNatOriginGeoKey',
    3093: 'ProjScaleAtCenterGeoKey',
    3094: 'ProjAzimuthAngleGeoKey',
    3095: 'ProjStraightVertPoleLongGeoKey',
    3096: 'ProjRectifiedGridAngleGeoKey',
    4096: 'VerticalCSTypeGeoKey',
    4097: 'VerticalCitationGeoKey',
    4098: 'VerticalDatumGeoKey',
    4099: 'VerticalUnitsGeoKey',
}
tifffile-2018.11.28/tifffile/__init__.py
# -*- coding: utf-8 -*-
# tifffile/__init__.py
from .tifffile import __doc__, __all__, __version__, main
from .tifffile import *
tifffile-2018.11.28/tifffile/__main__.py
# -*- coding: utf-8 -*-
# tifffile/__main__.py
import sys
from .tifffile import main
sys.exit(main())
tifffile-2018.11.28/tifffile.egg-info/dependency_links.txt
tifffile-2018.11.28/tifffile.egg-info/entry_points.txt
[console_scripts]
lsm2bin = tifffile.lsm2bin:main
tifffile = tifffile:main
tifffile-2018.11.28/tifffile.egg-info/PKG-INFO
Metadata-Version: 2.1
Name: tifffile
Version: 2018.11.28
Summary: Read and write TIFF(r) files
Home-page: https://www.lfd.uci.edu/~gohlke/
Author: Christoph Gohlke
Author-email: cgohlke@uci.edu
License: BSD
Description: Read and write TIFF(r) files
============================
Tifffile is a Python library to
(1) store numpy arrays in TIFF (Tagged Image File Format) files, and
(2) read image and metadata from TIFF like files used in bioimaging.
Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, STK, LSM, NIH,
SGI, ImageJ, MicroManager, FluoView, ScanImage, SEQ, GEL, SVS, SCN, SIS, ZIF,
QPI, and GeoTIFF files.
Numpy arrays can be written to TIFF, BigTIFF, and ImageJ hyperstack compatible
files in multi-page, memory-mappable, tiled, predicted, or compressed form.
Only a subset of the TIFF specification is supported, mainly uncompressed and
losslessly compressed 1, 8, 16, 32 and 64-bit integer, 16, 32 and 64-bit float,
grayscale and RGB(A) images.
Specifically, reading slices of image data, CCITT and OJPEG compression,
chroma subsampling without JPEG compression, or IPTC and XMP metadata are not
implemented.
TIFF(r), the Tagged Image File Format, is a trademark and under control of
Adobe Systems Incorporated. BigTIFF allows for files greater than 4 GB.
STK, LSM, FluoView, SGI, SEQ, GEL, and OME-TIFF, are custom extensions
defined by Molecular Devices (Universal Imaging Corporation), Carl Zeiss
MicroImaging, Olympus, Silicon Graphics International, Media Cybernetics,
Molecular Dynamics, and the Open Microscopy Environment consortium
respectively.
For command line usage run ``python -m tifffile --help``
:Author:
`Christoph Gohlke `_
:Organization:
Laboratory for Fluorescence Dynamics, University of California, Irvine
:Version: 2018.11.28
Requirements
------------
* `CPython 2.7 or 3.5+ 64-bit `_
* `Numpy 1.14 `_
* `Imagecodecs 2018.11.8 `_
(optional; used for decoding LZW, JPEG, etc.)
* `Matplotlib 2.2 `_ (optional; used for plotting)
* Python 2.7 requires 'futures', 'enum34', and 'pathlib'.
Revisions
---------
2018.11.28
Pass 2739 tests.
Make SubIFDs accessible as TiffPage.pages.
Make parsing of TiffSequence axes pattern optional (backward incompatible).
Limit parsing of TiffSequence axes pattern to file names, not path names.
Do not interpolate in imshow if image dimensions <= 512, else use bilinear.
Use logging.warning instead of warnings.warn in many cases.
Fix numpy FutureWarning for out == 'memmap'.
Adjust ZSTD and WebP compression to libtiff-4.0.10 (WIP).
Decode old style LZW with imagecodecs >= 2018.11.8.
Remove TiffFile.qptiff_metadata (QPI metadata are per page).
Do not use keyword arguments before variable positional arguments.
Make either all or none return statements in a function return expression.
Use pytest parametrize to generate tests.
Replace test classes with functions.
2018.11.6
Rename imsave function to imwrite.
Re-add Python implementations of packints, delta, and bitorder codecs.
Fix TiffFrame.compression AttributeError (bug fix).
2018.10.18
Rename tiffile package to tifffile.
2018.10.10
Pass 2710 tests.
Read ZIF, the Zoomable Image Format (WIP).
Decode YCbCr JPEG as RGB (tentative).
Improve restoration of incomplete tiles.
Allow to write grayscale with extrasamples without specifying planarconfig.
Enable decoding of PNG and JXR via imagecodecs.
Deprecate 32-bit platforms (too many memory errors during tests).
2018.9.27
Read Olympus SIS (WIP).
Allow to write non-BigTIFF files up to ~4 GB (bug fix).
Fix parsing date and time fields in SEM metadata (bug fix).
Detect some circular IFD references.
Enable WebP codecs via imagecodecs.
Add option to read TiffSequence from ZIP containers.
Remove TiffFile.isnative.
Move TIFF struct format constants out of TiffFile namespace.
2018.8.31
Pass 2699 tests.
Fix wrong TiffTag.valueoffset (bug fix).
Towards reading Hamamatsu NDPI (WIP).
Enable PackBits compression of byte and bool arrays.
Fix parsing NULL terminated CZ_SEM strings.
2018.8.24
Move tifffile.py and related modules into tiffile package.
Move usage examples to module docstring.
Enable multi-threading for compressed tiles and pages by default.
Add option to concurrently decode image tiles using threads.
Do not skip empty tiles (bug fix).
Read JPEG and J2K compressed strips and tiles.
Allow floating point predictor on write.
Add option to specify subfiletype on write.
Depend on imagecodecs package instead of _tifffile, lzma, etc modules.
Remove reverse_bitorder, unpack_ints, and decode functions.
Use pytest instead of unittest.
2018.6.20
Save RGBA with unassociated extrasample by default (backward incompatible).
Add option to specify ExtraSamples values.
2018.6.17
Pass 2680 tests.
Towards reading JPEG and other compressions via imagecodecs package (WIP).
Read SampleFormat VOID as UINT.
Add function to validate TIFF using 'jhove -m TIFF-hul'.
Save bool arrays as bilevel TIFF.
Accept pathlib.Path as filenames.
Move 'software' argument from TiffWriter __init__ to save.
Raise DOS limit to 16 TB.
Lazy load lzma and zstd compressors and decompressors.
Add option to save IJMetadata tags.
Return correct number of pages for truncated series (bug fix).
Move EXIF tags to TIFF.TAG as per TIFF/EP standard.
2018.2.18
Pass 2293 tests.
Always save RowsPerStrip and Resolution tags as required by TIFF standard.
Do not use badly typed ImageDescription.
Coerce bad ASCII string tags to bytes.
Tuning of __str__ functions.
Fix reading 'undefined' tag values (bug fix).
Read and write ZSTD compressed data.
Use hexdump to print byte strings.
Determine TIFF byte order from data dtype in imsave.
Add option to specify RowsPerStrip for compressed strips.
Allow memory-map of arrays with non-native byte order.
Attempt to handle ScanImage <= 5.1 files.
Restore TiffPageSeries.pages sequence interface.
Use numpy.frombuffer instead of fromstring to read from binary data.
Parse GeoTIFF metadata.
Add option to apply horizontal differencing before compression.
Towards reading PerkinElmer QPI (QPTIFF, no test files).
Do not index out of bounds data in tifffile.c unpackbits and decodelzw.
2017.9.29 (tentative)
Many backward incompatible changes improving speed and resource usage:
Pass 2268 tests.
Add detail argument to __str__ function. Remove info functions.
Fix potential issue correcting offsets of large LSM files with positions.
Remove TiffFile sequence interface; use TiffFile.pages instead.
Do not make tag values available as TiffPage attributes.
Use str (not bytes) type for tag and metadata strings (WIP).
Use documented standard tag and value names (WIP).
Use enums for some documented TIFF tag values.
Remove 'memmap' and 'tmpfile' options; use out='memmap' instead.
Add option to specify output in asarray functions.
Add option to concurrently decode pages using threads.
Add TiffPage.asrgb function (WIP).
Do not apply colormap in asarray.
Remove 'colormapped', 'rgbonly', and 'scale_mdgel' options from asarray.
Consolidate metadata in TiffFile _metadata functions.
Remove non-tag metadata properties from TiffPage.
Add function to convert LSM to tiled BIN files.
Align image data in file.
Make TiffPage.dtype a numpy.dtype.
Add 'ndim' and 'size' properties to TiffPage and TiffPageSeries.
Allow imsave to write non-BigTIFF files up to ~4 GB.
Only read one page for shaped series if possible.
Add memmap function to create memory-mapped array stored in TIFF file.
Add option to save empty arrays to TIFF files.
Add option to save truncated TIFF files.
Allow single tile images to be saved contiguously.
Add optional movie mode for files with uniform pages.
Lazy load pages.
Use lightweight TiffFrame for IFDs sharing properties with key TiffPage.
Move module constants to 'TIFF' namespace (speed up module import).
Remove 'fastij' option from TiffFile.
Remove 'pages' parameter from TiffFile.
Remove TIFFfile alias.
Deprecate Python 2.
Require enum34 and futures packages on Python 2.7.
Remove Record class and return all metadata as dict instead.
Add functions to parse STK, MetaSeries, ScanImage, SVS, Pilatus metadata.
Read tags from EXIF and GPS IFDs.
Use pformat for tag and metadata values.
Fix reading some UIC tags (bug fix).
Do not modify input array in imshow (bug fix).
Fix Python implementation of unpack_ints.
2017.5.23
Pass 1961 tests.
Write correct number of SampleFormat values (bug fix).
Use Adobe deflate code to write ZIP compressed files.
Add option to pass tag values as packed binary data for writing.
Defer tag validation to attribute access.
Use property instead of lazyattr decorator for simple expressions.
2017.3.17
Write IFDs and tag values on word boundaries.
Read ScanImage metadata.
Remove is_rgb and is_indexed attributes from TiffFile.
Create files used by doctests.
2017.1.12
Read Zeiss SEM metadata.
Read OME-TIFF with invalid references to external files.
Rewrite C LZW decoder (5x faster).
Read corrupted LSM files missing EOI code in LZW stream.
2017.1.1
Add option to append images to existing TIFF files.
Read files without pages.
Read S-FEG and Helios NanoLab tags created by FEI software.
Allow saving Color Filter Array (CFA) images.
Add info functions returning more information about TiffFile and TiffPage.
Add option to read specific pages only.
Remove maxpages argument (backward incompatible).
Remove test_tifffile function.
2016.10.28
Pass 1944 tests.
Improve detection of ImageJ hyperstacks.
Read TVIPS metadata created by EM-MENU (by Marco Oster).
Add option to disable using OME-XML metadata.
Allow non-integer range attributes in modulo tags (by Stuart Berg).
2016.6.21
Do not always memmap contiguous data in page series.
2016.5.13
Add option to specify resolution unit.
Write grayscale images with extra samples when planarconfig is specified.
Do not write RGB color images with 2 samples.
Reorder TiffWriter.save keyword arguments (backward incompatible).
2016.4.18
Pass 1932 tests.
TiffWriter, imread, and imsave accept open binary file streams.
2016.04.13
Correctly handle reversed fill order in 2 and 4 bps images (bug fix).
Implement reverse_bitorder in C.
2016.03.18
Fix saving additional ImageJ metadata.
2016.2.22
Pass 1920 tests.
Write 8 bytes double tag values using offset if necessary (bug fix).
Add option to disable writing second image description tag.
Detect tags with incorrect counts.
Disable color mapping for LSM.
2015.11.13
Read LSM 6 mosaics.
Add option to specify directory of memory-mapped files.
Add command line options to specify vmin and vmax values for colormapping.
2015.10.06
New helper function to apply colormaps.
Renamed is_palette attributes to is_indexed (backward incompatible).
Color-mapped samples are now contiguous (backward incompatible).
Do not color-map ImageJ hyperstacks (backward incompatible).
Towards reading Leica SCN.
2015.9.25
Read images with reversed bit order (FillOrder is LSB2MSB).
2015.9.21
Read RGB OME-TIFF.
Warn about malformed OME-XML.
2015.9.16
Detect some corrupted ImageJ metadata.
Better axes labels for 'shaped' files.
Do not create TiffTag for default values.
Chroma subsampling is not supported.
Memory-map data in TiffPageSeries if possible (optional).
2015.8.17
Pass 1906 tests.
Write ImageJ hyperstacks (optional).
Read and write LZMA compressed data.
Specify datetime when saving (optional).
Save tiled and color-mapped images (optional).
Ignore void bytecounts and offsets if possible.
Ignore bogus image_depth tag created by ISS Vista software.
Decode floating point horizontal differencing (not tiled).
Save image data contiguously if possible.
Only read first IFD from ImageJ files if possible.
Read ImageJ 'raw' format (files larger than 4 GB).
TiffPageSeries class for pages with compatible shape and data type.
Try to read incomplete tiles.
Open file dialog if no filename is passed on command line.
Ignore errors when decoding OME-XML.
Rename decoder functions (backward incompatible).
2014.8.24
TiffWriter class for incremental writing images.
Simplify examples.
2014.8.19
Add memmap function to FileHandle.
Add function to determine if image data in TiffPage is memory-mappable.
Do not close files if multifile_close parameter is False.
2014.8.10
Pass 1730 tests.
Return all extrasamples by default (backward incompatible).
Read data from series of pages into memory-mapped array (optional).
Squeeze OME dimensions (backward incompatible).
Workaround missing EOI code in strips.
Support image and tile depth tags (SGI extension).
Better handling of STK/UIC tags (backward incompatible).
Disable color mapping for STK.
Julian to datetime converter.
TIFF ASCII type may be NULL separated.
Unwrap strip offsets for LSM files greater than 4 GB.
Correct strip byte counts in compressed LSM files.
Skip missing files in OME series.
Read embedded TIFF files.
2014.2.05
Save rational numbers as type 5 (bug fix).
2013.12.20
Keep other files in OME multi-file series closed.
FileHandle class to abstract binary file handle.
Disable color mapping for bad OME-TIFF produced by bio-formats.
Read bad OME-XML produced by ImageJ when cropping.
2013.11.3
Allow zlib compress data in imsave function (optional).
Memory-map contiguous image data (optional).
2013.10.28
Read MicroManager metadata and little-endian ImageJ tag.
Save extra tags in imsave function.
Save tags in ascending order by code (bug fix).
2012.10.18
Accept file like objects (read from OIB files).
2012.8.21
Rename TIFFfile to TiffFile and TIFFpage to TiffPage.
TiffSequence class for reading sequence of TIFF files.
Read UltraQuant tags.
Allow float numbers as resolution in imsave function.
2012.8.3
Read MD GEL tags and NIH Image header.
2012.7.25
Read ImageJ tags.
...
Notes
-----
The API is not stable yet and might change between revisions.
Tested on little-endian platforms only.
Python 2.7, 3.4, and 32-bit versions are deprecated.
Other libraries for reading scientific TIFF files from Python:
* `Python-bioformats `_
* `Imread `_
* `GDAL `_
* `OpenSlide-python `_
* `PyLibTiff `_
* `SimpleITK `_
* `PyLSM `_
* `PyMca.TiffIO.py `_ (same as fabio.TiffIO)
* `BioImageXD.Readers `_
* `Cellcognition.io `_
* `pymimage `_
* `pytiff `_
Acknowledgements
----------------
* Egor Zindy, University of Manchester, for lsm_scan_info specifics.
* Wim Lewis for a bug fix and some LSM functions.
* Hadrien Mary for help on reading MicroManager files.
* Christian Kliche for help writing tiled and color-mapped files.
References
----------
1) TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated.
https://www.adobe.io/open/standards/TIFF.html
2) TIFF File Format FAQ. https://www.awaresystems.be/imaging/tiff/faq.html
3) MetaMorph Stack (STK) Image File Format.
http://mdc.custhelp.com/app/answers/detail/a_id/18862
4) Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010).
Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011
5) The OME-TIFF format.
https://docs.openmicroscopy.org/ome-model/5.6.4/ome-tiff/
6) UltraQuant(r) Version 6.0 for Windows Start-Up Guide.
http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf
7) Micro-Manager File Formats.
https://micro-manager.org/wiki/Micro-Manager_File_Formats
8) Tags for TIFF and Related Specifications. Digital Preservation.
https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
9) ScanImage BigTiff Specification - ScanImage 2016.
http://scanimage.vidriotechnologies.com/display/SI2016/
ScanImage+BigTiff+Specification
10) CIPA DC-008-2016: Exchangeable image file format for digital still cameras:
Exif Version 2.31.
http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf
11) ZIF, the Zoomable Image File format. http://zif.photo/
Examples
--------
Save a 3D numpy array to a multi-page, 16-bit grayscale TIFF file:
>>> data = numpy.random.randint(0, 2**16, (4, 301, 219), 'uint16')
>>> imwrite('temp.tif', data, photometric='minisblack')
Read the whole image stack from the TIFF file as numpy array:
>>> image_stack = imread('temp.tif')
>>> image_stack.shape
(4, 301, 219)
>>> image_stack.dtype
dtype('uint16')
Read the image from first page (IFD) in the TIFF file:
>>> image = imread('temp.tif', key=0)
>>> image.shape
(301, 219)
Read images from a sequence of TIFF files as numpy array:
>>> image_sequence = imread(['temp.tif', 'temp.tif'])
>>> image_sequence.shape
(2, 4, 301, 219)
Save a numpy array to a single-page RGB TIFF file:
>>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8')
>>> imwrite('temp.tif', data, photometric='rgb')
Save a floating-point array and metadata, using zlib compression:
>>> data = numpy.random.rand(2, 5, 3, 301, 219).astype('float32')
>>> imwrite('temp.tif', data, compress=6, metadata={'axes': 'TZCYX'})
Save a volume with xyz voxel size 2.6755x2.6755x3.9474 µm^3 to ImageJ file:
>>> volume = numpy.random.randn(57*256*256).astype('float32')
>>> volume.shape = 1, 57, 1, 256, 256, 1 # dimensions in TZCYXS order
>>> imwrite('temp.tif', volume, imagej=True, resolution=(1./2.6755, 1./2.6755),
... metadata={'spacing': 3.947368, 'unit': 'um'})
Read hyperstack and metadata from ImageJ file:
>>> with TiffFile('temp.tif') as tif:
... imagej_hyperstack = tif.asarray()
... imagej_metadata = tif.imagej_metadata
>>> imagej_hyperstack.shape
(57, 256, 256)
>>> imagej_metadata['slices']
57
Create an empty TIFF file and write to the memory-mapped numpy array:
>>> memmap_image = memmap('temp.tif', shape=(256, 256), dtype='float32')
>>> memmap_image[255, 255] = 1.0
>>> memmap_image.flush()
>>> memmap_image.shape, memmap_image.dtype
((256, 256), dtype('float32'))
>>> del memmap_image
Memory-map image data in the TIFF file:
>>> memmap_image = memmap('temp.tif', page=0)
>>> memmap_image[255, 255]
1.0
>>> del memmap_image
Successively append images to a BigTIFF file:
>>> data = numpy.random.randint(0, 255, (5, 2, 3, 301, 219), 'uint8')
>>> with TiffWriter('temp.tif', bigtiff=True) as tif:
... for i in range(data.shape[0]):
... tif.save(data[i], compress=6, photometric='minisblack')
Iterate over pages and tags in the TIFF file and successively read images:
>>> with TiffFile('temp.tif') as tif:
... image_stack = tif.asarray()
... for page in tif.pages:
... for tag in page.tags.values():
... tag_name, tag_value = tag.name, tag.value
... image = page.asarray()
Save two image series to a TIFF file:
>>> data0 = numpy.random.randint(0, 255, (301, 219, 3), 'uint8')
>>> data1 = numpy.random.randint(0, 255, (5, 301, 219), 'uint16')
>>> with TiffWriter('temp.tif') as tif:
... tif.save(data0, compress=6, photometric='rgb')
... tif.save(data1, compress=6, photometric='minisblack')
Read the second image series from the TIFF file:
>>> series1 = imread('temp.tif', series=1)
>>> series1.shape
(5, 301, 219)
Read an image stack from a sequence of TIFF files with a file name pattern:
>>> imwrite('temp_C001T001.tif', numpy.random.rand(64, 64))
>>> imwrite('temp_C001T002.tif', numpy.random.rand(64, 64))
>>> image_sequence = TiffSequence('temp_C001*.tif', pattern='axes')
>>> image_sequence.shape
(1, 2)
>>> image_sequence.axes
'CT'
>>> data = image_sequence.asarray()
>>> data.shape
(1, 2, 64, 64)
Platform: any
Classifier: Development Status :: 4 - Beta
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Requires-Python: >=2.7
Provides-Extra: all
tifffile-2018.11.28/tifffile.egg-info/requires.txt
numpy>=1.11.3
[:platform_system == "Windows"]
imagecodecs>=2018.11.8
[:python_version == "2.7"]
pathlib
enum34
futures
[all]
matplotlib>=2.2
imagecodecs>=2018.11.8
tifffile-2018.11.28/tifffile.egg-info/SOURCES.txt
LICENSE
MANIFEST.in
README.rst
setup.py
setup_tiffile.py
tiffile.py
tests/conftest.py
tests/test_tifffile.py
tifffile/__init__.py
tifffile/__main__.py
tifffile/lsm2bin.py
tifffile/tifffile.py
tifffile/tifffile_geodb.py
tifffile.egg-info/PKG-INFO
tifffile.egg-info/SOURCES.txt
tifffile.egg-info/dependency_links.txt
tifffile.egg-info/entry_points.txt
tifffile.egg-info/requires.txt
tifffile.egg-info/top_level.txt
tifffile-2018.11.28/tifffile.egg-info/top_level.txt
tifffile
tifffile-2018.11.28/tiffile.py
# -*- coding: utf-8 -*-
# tiffile.py
"""Proxy module for the tifffile package."""
from tifffile.tifffile import __doc__, __all__, __version__ # noqa
from tifffile.tifffile import lsm2bin, main # noqa
from tifffile.tifffile import * # noqa
if __name__ == '__main__':
    import sys
    sys.exit(main())
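The proxy module above simply re-exports the public API of the renamed package so that legacy ``import tiffile`` statements keep working. The pattern can be sketched generically without tifffile installed; all names in this sketch (``newpkg``, ``oldname``, ``imread``) are hypothetical stand-ins:

```python
import sys
import types

# Stand-in for the real, renamed package (hypothetical names throughout).
newpkg = types.ModuleType('newpkg')
newpkg.__all__ = ['imread']
newpkg.imread = lambda path: 'image data from %s' % path
sys.modules['newpkg'] = newpkg

# The proxy: a stub registered under the legacy name that re-exports
# everything listed in the new package's __all__.
legacy = types.ModuleType('oldname')
for name in newpkg.__all__:
    setattr(legacy, name, getattr(newpkg, name))
sys.modules['oldname'] = legacy

import oldname  # resolves to the proxy just registered

print(oldname.imread('temp.tif'))  # prints: image data from temp.tif
```

The real proxy uses ``from tifffile.tifffile import *`` instead of copying attributes by hand, but the effect is the same: one module object serving the old name with the new package's functions.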