===== tomoscan-1.2.2/LICENSE =====
The goal of the tomoscan library is to provide a Python interface for reading ESRF tomography datasets.
tomoscan is distributed under the MIT license.
The MIT license follows:
Copyright (c) European Synchrotron Radiation Facility (ESRF)
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
===== tomoscan-1.2.2/PKG-INFO =====
Metadata-Version: 2.1
Name: tomoscan
Version: 1.2.2
Summary: "utility to access tomography data at ESRF"
Home-page: https://gitlab.esrf.fr/tomotools/tomoscan
Author: data analysis unit
Author-email: henri.payno@esrf.fr
License: MIT
Project-URL: Bug Tracker, https://gitlab.esrf.fr/tomotools/tomoscan/-/issues
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Environment :: Console
Classifier: Environment :: X11 Applications :: Qt
Classifier: Operating System :: POSIX
Classifier: Natural Language :: English
Classifier: Topic :: Scientific/Engineering :: Physics
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.6
Description-Content-Type: text/markdown
Provides-Extra: doc
Provides-Extra: full
Provides-Extra: setup_requires
License-File: LICENSE
===== tomoscan-1.2.2/README.md =====
tomoscan
========
This library offers an abstraction for accessing tomography data from various file formats.

It can read:

- acquisitions from spec (.edf) and bliss (.hdf5)
- volumes:

  - single-frame files: EDF, JP2K, TIFF
  - multi-frame files: HDF5, multi-TIFF

installation
''''''''''''

To install the latest tomoscan pip package:

.. code-block:: bash

    pip install tomoscan

You can also install tomoscan from source:

.. code-block:: bash

    pip install git+https://gitlab.esrf.fr/tomotools/tomoscan.git

documentation
'''''''''''''

General documentation can be found here: https://tomotools.gitlab-pages.esrf.fr/tomoscan/
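
usage
'''''

A minimal quick-start sketch (the acquisition path below is hypothetical; ``EDFTomoScan`` is re-exported from ``tomoscan.esrf``):

.. code-block:: python

    from tomoscan.esrf import EDFTomoScan

    # assumes the folder contains a <dataset_basename>.info file describing the acquisition
    scan = EDFTomoScan("/path/to/acquisition")
    print(scan.tomo_n)            # number of projections declared in the .info file
    print(len(scan.projections))  # one silx DataUrl per projection frame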
===== tomoscan-1.2.2/setup.cfg =====
[metadata]
name = tomoscan
version = attr: tomoscan.__version__
author = data analysis unit
author_email = henri.payno@esrf.fr
description = "utility to access tomography data at ESRF"
long_description = file: README.md
long_description_content_type = text/markdown
license = MIT
url = https://gitlab.esrf.fr/tomotools/tomoscan
project_urls =
Bug Tracker = https://gitlab.esrf.fr/tomotools/tomoscan/-/issues
classifiers =
Intended Audience :: Education
Intended Audience :: Science/Research
License :: OSI Approved :: MIT License
Programming Language :: Python :: 3
Environment :: Console
Environment :: X11 Applications :: Qt
Operating System :: POSIX
Natural Language :: English
Topic :: Scientific/Engineering :: Physics
Topic :: Software Development :: Libraries :: Python Modules
[options]
packages = find:
python_requires = >=3.6
install_requires =
setuptools
h5py>=3.0
silx>=0.14a
lxml
dicttoxml
packaging
[options.extras_require]
doc =
Sphinx>=4.0.0, <5.2.0
nbsphinx
pandoc
ipykernel
jupyter_client
nbconvert
h5glance
pytest
full =
%(doc)s
glymur
tifffile
setup_requires =
setuptools
numpy
[build_sphinx]
source-dir = ./doc
[egg_info]
tag_build =
tag_date = 0
===== tomoscan-1.2.2/setup.py =====
#!/usr/bin/python
# coding: utf8
# /*##########################################################################
#
# Copyright (c) 2015-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno", "P. Paleo", "C. Nemoz"]
__date__ = "26/07/2021"
__license__ = "MIT"
import setuptools
if __name__ == "__main__":
setuptools.setup()
===== tomoscan-1.2.2/tomoscan/__init__.py =====
from .version import version as __version
__version__ = __version
===== tomoscan-1.2.2/tomoscan/esrf/__init__.py =====
# coding: utf-8
# /*##########################################################################
# Copyright (C) 2016 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "09/08/2018"
from .scan.hdf5scan import HDF5TomoScan # noqa F401
from .scan.hdf5scan import HDF5XRD3DScan # noqa F401
from .scan.edfscan import EDFTomoScan # noqa F401
from .volume.hdf5volume import HDF5Volume # noqa F401
from .volume.edfvolume import EDFVolume # noqa F401
from .volume.tiffvolume import TIFFVolume # noqa F401
from .volume.tiffvolume import MultiTIFFVolume # noqa F401
from .volume.jp2kvolume import JP2KVolume # noqa F401
from .volume.rawvolume import RawVolume # noqa F401
from .volume.jp2kvolume import has_glymur # noqa F401
from .volume.tiffvolume import has_tifffile # noqa F401
TYPES = ["EDF", "HDF5"]
===== tomoscan-1.2.2/tomoscan/esrf/edfscan.py =====
from silx.utils.deprecation import deprecated_warning
deprecated_warning(
"Module",
name="tomoscan.esrf.edfscan",
    reason="Has been moved",
replacement="tomoscan.esrf.scan.edfscan",
only_once=True,
)
from .scan.edfscan import * # noqa F401
===== tomoscan-1.2.2/tomoscan/esrf/hdf5scan.py =====
from silx.utils.deprecation import deprecated_warning
deprecated_warning(
"Module",
name="tomoscan.esrf.hdf5scan",
    reason="Has been moved",
replacement="tomoscan.esrf.scan.hdf5scan",
only_once=True,
)
from .scan.hdf5scan import * # noqa F401
===== tomoscan-1.2.2/tomoscan/esrf/identifier/__init__.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""This module is dedicated to instances of :class:`BaseIdentifier` used at esrf"""
from .hdf5Identifier import HDF5TomoScanIdentifier # noqa F401
from .edfidentifier import EDFTomoScanIdentifier # noqa F401
from .jp2kidentifier import JP2KVolumeIdentifier # noqa F401
from .tiffidentifier import TIFFVolumeIdentifier # noqa F401
from .tiffidentifier import MultiTiffVolumeIdentifier # noqa F401
from .rawidentifier import RawVolumeIdentifier # noqa F401
===== tomoscan-1.2.2/tomoscan/esrf/identifier/edfidentifier.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "27/01/2022"
from tomoscan.esrf.identifier.folderidentifier import (
BaseFolderAndfilePrefixIdentifierMixIn,
)
from tomoscan.identifier import ScanIdentifier, VolumeIdentifier
from tomoscan.utils import docstring
class _BaseEDFIdentifier(BaseFolderAndfilePrefixIdentifierMixIn):
"""Identifier specific to EDF TomoScan"""
@property
@docstring(ScanIdentifier)
def scheme(self) -> str:
return "edf"
class EDFTomoScanIdentifier(_BaseEDFIdentifier, ScanIdentifier):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, tomo_type=ScanIdentifier.TOMO_TYPE)
@staticmethod
def from_str(identifier):
from tomoscan.esrf.scan.edfscan import EDFTomoScan
return (
BaseFolderAndfilePrefixIdentifierMixIn._from_str_to_single_frame_identifier(
identifier=identifier,
SingleFrameIdentifierClass=EDFTomoScanIdentifier,
ObjClass=EDFTomoScan,
)
)
class EDFVolumeIdentifier(_BaseEDFIdentifier, VolumeIdentifier):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, tomo_type=VolumeIdentifier.TOMO_TYPE)
@staticmethod
def from_str(identifier):
from tomoscan.esrf.volume.edfvolume import EDFVolume
return (
BaseFolderAndfilePrefixIdentifierMixIn._from_str_to_single_frame_identifier(
identifier=identifier,
SingleFrameIdentifierClass=EDFVolumeIdentifier,
ObjClass=EDFVolume,
)
)
===== tomoscan-1.2.2/tomoscan/esrf/identifier/folderidentifier.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "01/02/2022"
import os
from urllib.parse import ParseResult, urlparse
from tomoscan.esrf.identifier.url_utils import (
UrlSettings,
join_path,
join_query,
split_path,
split_query,
reduce_file_path,
)
class BaseFolderIdentifierMixIn:
"""Identifier specific to a folder. Used for single frame edf and jp2g for example"""
def __init__(self, object, folder, tomo_type):
super().__init__(object)
self._folder = os.path.realpath(os.path.abspath(folder))
self.__tomo_type = tomo_type
def short_description(self) -> str:
folder_name = reduce_file_path(os.path.basename(self.folder))
return ParseResult(
scheme="",
path=join_path((self.tomo_type, folder_name)),
query=None,
netloc=None,
params=None,
fragment=None,
).geturl()
@property
def tomo_type(self):
        # warning: this property will probably be overwritten
return self.__tomo_type
@property
def folder(self):
return self._folder
@property
def scheme(self) -> str:
raise NotImplementedError("base class")
def __str__(self):
return ParseResult(
scheme=self.scheme,
path=join_path((self.tomo_type, self.folder)),
query=None,
netloc=None,
params=None,
fragment=None,
).geturl()
def __eq__(self, other):
if isinstance(other, BaseFolderIdentifierMixIn):
return self.folder == other.folder and self.tomo_type == other.tomo_type
else:
return super().__eq__(other)
def __hash__(self):
return hash(self.folder)
class BaseFolderAndfilePrefixIdentifierMixIn(BaseFolderIdentifierMixIn):
def __init__(self, object, folder, file_prefix, tomo_type):
super().__init__(object, folder, tomo_type)
self._file_prefix = file_prefix
def short_description(self) -> str:
query = []
if self.file_prefix not in (None, ""):
query.append(
("file_prefix", self.file_prefix),
)
file_path = reduce_file_path(self.folder)
return ParseResult(
scheme="",
path=join_path((self.tomo_type, file_path)),
query=join_query(query),
netloc=None,
params=None,
fragment=None,
).geturl()
@property
def file_prefix(self) -> str:
return self._file_prefix
def __str__(self):
query = []
if self.file_prefix not in (None, ""):
query.append(
("file_prefix", self.file_prefix),
)
return ParseResult(
scheme=self.scheme,
path=join_path((self.tomo_type, self.folder)),
query=join_query(query),
netloc=None,
params=None,
fragment=None,
).geturl()
def __eq__(self, other):
return super().__eq__(other) and self.file_prefix == other.file_prefix
def __hash__(self):
return hash(self.folder) + hash(self.file_prefix)
@staticmethod
def _from_str_to_single_frame_identifier(
identifier: str, SingleFrameIdentifierClass, ObjClass: type
):
"""
        Common function to build an identifier from a str. Might be moved to the factory directly one day?
"""
info = urlparse(identifier)
paths = split_path(info.path)
if len(paths) == 1:
jp2k_folder = paths[0]
tomo_type = None
elif len(paths) == 2:
tomo_type, jp2k_folder = paths
else:
raise ValueError("Failed to parse path string:", info.path)
if tomo_type is not None and tomo_type != SingleFrameIdentifierClass.TOMO_TYPE:
raise TypeError(
f"provided identifier fits {tomo_type} and not {SingleFrameIdentifierClass.TOMO_TYPE}"
)
queries = split_query(info.query)
file_prefix = queries.get(UrlSettings.FILE_PREFIX, None)
return SingleFrameIdentifierClass(
object=ObjClass, folder=jp2k_folder, file_prefix=file_prefix
)
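# Illustrative identifier shapes accepted by _from_str_to_single_frame_identifier
# (a sketch; the scheme and tomo_type values depend on the concrete subclass):
#   "edf:volume:/path/to/folder?file_prefix=vol_"  -> (tomo_type, folder) from the path, file_prefix from the query
#   "/path/to/folder"                              -> folder only, the tomo_type check is skipped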
===== tomoscan-1.2.2/tomoscan/esrf/identifier/hdf5Identifier.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "10/01/2022"
from urllib.parse import ParseResult, urlparse
from tomoscan.esrf.identifier.url_utils import (
UrlSettings,
join_path,
join_query,
split_path,
split_query,
reduce_file_path,
)
from tomoscan.identifier import ScanIdentifier, VolumeIdentifier
import os
from tomoscan.utils import docstring
class _HDF5IdentifierMixIn:
def __init__(self, object, hdf5_file, entry, tomo_type):
super().__init__(object)
self._file_path = os.path.realpath(os.path.abspath(hdf5_file))
self._data_path = entry
self._tomo_type = tomo_type
@property
def tomo_type(self):
return self._tomo_type
@docstring(ScanIdentifier)
def short_description(self) -> str:
file_name = reduce_file_path(os.path.basename(self._file_path))
return ParseResult(
scheme="",
path=join_path(
(self.tomo_type, file_name),
),
query=join_query(
((UrlSettings.DATA_PATH_KEY, self.data_path),),
),
netloc=None,
params=None,
fragment=None,
).geturl()
@property
def file_path(self):
return self._file_path
@property
def data_path(self):
return self._data_path
@property
@docstring(ScanIdentifier)
def scheme(self) -> str:
return "hdf5"
def __str__(self):
return ParseResult(
scheme=self.scheme,
path=join_path(
(self.tomo_type, self._file_path),
),
query=join_query(
((UrlSettings.DATA_PATH_KEY, self.data_path),),
),
netloc=None,
params=None,
fragment=None,
).geturl()
def __eq__(self, other):
if isinstance(other, HDF5TomoScanIdentifier):
return (
self.tomo_type == other.tomo_type
and self._file_path == other._file_path
and self._data_path == other._data_path
)
else:
return super().__eq__(other)
def __hash__(self):
return hash((self._file_path, self._data_path))
class HDF5TomoScanIdentifier(_HDF5IdentifierMixIn, ScanIdentifier):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, tomo_type=ScanIdentifier.TOMO_TYPE)
@staticmethod
def from_str(identifier):
info = urlparse(identifier)
paths = split_path(info.path)
if len(paths) == 1:
hdf5_file = paths[0]
tomo_type = None
elif len(paths) == 2:
tomo_type, hdf5_file = paths
else:
raise ValueError("Failed to parse path string:", info.path)
if tomo_type is not None and tomo_type != HDF5TomoScanIdentifier.TOMO_TYPE:
raise TypeError(
f"provided identifier fits {tomo_type} and not {HDF5TomoScanIdentifier.TOMO_TYPE}"
)
queries = split_query(info.query)
entry = queries.get(UrlSettings.DATA_PATH_KEY, None)
if entry is None:
raise ValueError(f"expects to get {UrlSettings.DATA_PATH_KEY} query")
from tomoscan.esrf.scan.hdf5scan import HDF5TomoScan
return HDF5TomoScanIdentifier(
object=HDF5TomoScan, hdf5_file=hdf5_file, entry=entry
)
class HDF5VolumeIdentifier(_HDF5IdentifierMixIn, VolumeIdentifier):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, tomo_type=VolumeIdentifier.TOMO_TYPE)
@staticmethod
def from_str(identifier):
info = urlparse(identifier)
paths = split_path(info.path)
if len(paths) == 1:
hdf5_file = paths[0]
tomo_type = None
elif len(paths) == 2:
tomo_type, hdf5_file = paths
else:
raise ValueError("Failed to parse path string:", info.path)
if tomo_type is not None and tomo_type != VolumeIdentifier.TOMO_TYPE:
raise TypeError(
f"provided identifier fits {tomo_type} and not {VolumeIdentifier.TOMO_TYPE}"
)
queries = split_query(info.query)
entry = queries.get(UrlSettings.DATA_PATH_KEY, None)
if entry is None:
raise ValueError("expects to get a data_path")
from tomoscan.esrf.volume.hdf5volume import HDF5Volume
return HDF5VolumeIdentifier(object=HDF5Volume, hdf5_file=hdf5_file, entry=entry)
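# Usage sketch (hypothetical file path; assuming ScanIdentifier.TOMO_TYPE == "scan"):
#   identifier = HDF5TomoScanIdentifier.from_str(
#       "hdf5:scan:/data/sample/sample.h5?data_path=entry0000"
#   )
# urlparse splits this into scheme="hdf5", path="scan:/data/sample/sample.h5"
# and query="data_path=entry0000", which from_str maps to the HDF5 file and entry.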
===== tomoscan-1.2.2/tomoscan/esrf/identifier/jp2kidentifier.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "01/02/2022"
from tomoscan.esrf.identifier.folderidentifier import (
BaseFolderAndfilePrefixIdentifierMixIn,
)
from tomoscan.identifier import VolumeIdentifier
from tomoscan.utils import docstring
class JP2KVolumeIdentifier(BaseFolderAndfilePrefixIdentifierMixIn, VolumeIdentifier):
"""Identifier specific to JP2K volume"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, tomo_type=VolumeIdentifier.TOMO_TYPE)
@property
@docstring(VolumeIdentifier)
def scheme(self) -> str:
return "jp2k"
@staticmethod
def from_str(identifier):
from tomoscan.esrf.volume.jp2kvolume import JP2KVolume
return (
BaseFolderAndfilePrefixIdentifierMixIn._from_str_to_single_frame_identifier(
identifier=identifier,
SingleFrameIdentifierClass=JP2KVolumeIdentifier,
ObjClass=JP2KVolume,
)
)
===== tomoscan-1.2.2/tomoscan/esrf/identifier/rawidentifier.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "04/07/2022"
from tomoscan.identifier import VolumeIdentifier
import os
from tomoscan.utils import docstring
from tomoscan.esrf.identifier.url_utils import reduce_file_path
class RawVolumeIdentifier(VolumeIdentifier):
"""Identifier for the .vol volume"""
def __init__(self, object, file_path):
super().__init__(object)
self._file_path = os.path.realpath(os.path.abspath(file_path))
@docstring(VolumeIdentifier)
def short_description(self) -> str:
file_path = reduce_file_path(os.path.basename(self._file_path))
return f"{self.tomo_type}:{file_path}"
@property
def file_path(self):
return self._file_path
@property
@docstring(VolumeIdentifier)
def scheme(self) -> str:
return "raw"
def __str__(self):
return f"{self.scheme}:{self.tomo_type}:{self._file_path}"
def __eq__(self, other):
if isinstance(other, RawVolumeIdentifier):
return (
self.tomo_type == other.tomo_type
and self._file_path == other._file_path
)
else:
return False
def __hash__(self):
return hash(self._file_path)
@staticmethod
def from_str(identifier):
identifier_no_scheme = identifier.split(":")[-1]
vol_file = identifier_no_scheme
from tomoscan.esrf.volume.rawvolume import RawVolume
return RawVolumeIdentifier(object=RawVolume, file_path=vol_file)
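# Usage sketch (hypothetical path; assuming VolumeIdentifier.TOMO_TYPE == "volume"):
#   RawVolumeIdentifier.from_str("raw:volume:/data/recon.vol")
# from_str splits on ":" and keeps the last element, so this resolves to
# file_path="/data/recon.vol".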
===== tomoscan-1.2.2/tomoscan/esrf/identifier/tiffidentifier.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "01/02/2022"
from tomoscan.esrf.identifier.folderidentifier import (
BaseFolderAndfilePrefixIdentifierMixIn,
)
from tomoscan.identifier import VolumeIdentifier
import os
from tomoscan.utils import docstring
from tomoscan.esrf.identifier.url_utils import reduce_file_path
class TIFFVolumeIdentifier(BaseFolderAndfilePrefixIdentifierMixIn, VolumeIdentifier):
"""Identifier specific to (single frame) tiff volume"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs, tomo_type=VolumeIdentifier.TOMO_TYPE)
@property
@docstring(VolumeIdentifier)
def scheme(self) -> str:
return "tiff"
@staticmethod
def from_str(identifier):
from tomoscan.esrf.volume.tiffvolume import TIFFVolume
return (
BaseFolderAndfilePrefixIdentifierMixIn._from_str_to_single_frame_identifier(
identifier=identifier,
SingleFrameIdentifierClass=TIFFVolumeIdentifier,
ObjClass=TIFFVolume,
)
)
class MultiTiffVolumeIdentifier(VolumeIdentifier):
def __init__(self, object, tiff_file):
super().__init__(object)
self._file_path = os.path.realpath(os.path.abspath(tiff_file))
@docstring(VolumeIdentifier)
def short_description(self) -> str:
file_path = reduce_file_path(os.path.basename(self._file_path))
return f"{self.tomo_type}:{file_path}"
@property
def file_path(self):
return self._file_path
@property
@docstring(VolumeIdentifier)
def scheme(self) -> str:
return "tiff3d"
def __str__(self):
return f"{self.scheme}:{self.tomo_type}:{self._file_path}"
def __eq__(self, other):
if isinstance(other, MultiTiffVolumeIdentifier):
return (
self.tomo_type == other.tomo_type
and self._file_path == other._file_path
)
else:
return super().__eq__(other)
def __hash__(self):
return hash(self._file_path)
@staticmethod
def from_str(identifier):
identifier_no_scheme = identifier.split(":")[-1]
# TODO: check tomo_type ?
tiff_file = identifier_no_scheme
from tomoscan.esrf.volume.tiffvolume import TIFFVolume
return MultiTiffVolumeIdentifier(object=TIFFVolume, tiff_file=tiff_file)
===== tomoscan-1.2.2/tomoscan/esrf/identifier/url_utils.py =====
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["W. DeNolf", "H. Payno"]
__license__ = "MIT"
__date__ = "12/05/2022"
from typing import Iterable, Tuple
class UrlSettings:
FILE_PATH_KEY = "file_path"
DATA_PATH_KEY = "data_path"
FILE_PREFIX = "file_prefix"
def split_query(query: str) -> dict:
result = dict()
for s in query.split("&"):
if not s:
continue
name, _, value = s.partition("=")
prev_value = result.get(name)
if prev_value:
value = join_string(prev_value, value, "/")
result[name] = value
return result
def join_query(query_items: Iterable[Tuple[str, str]]) -> str:
return "&".join(f"{k}={v}" for k, v in query_items)
def join_string(a: str, b: str, sep: str):
aslash = a.endswith(sep)
bslash = b.startswith(sep)
if aslash and bslash:
return a[:-1] + b
elif aslash or bslash:
return a + b
else:
return a + sep + b
def join_path(path_items: tuple) -> str:
if not isinstance(path_items, tuple):
raise TypeError
return ":".join(path_items)
def split_path(path: str):
return path.split(":")
def reduce_file_path(file_path):
"""
    Utility used by short_description to display only the beginning and the end
    of a string if it is longer than the expected length.
"""
if len(file_path) > 23:
file_path = f"{file_path[:10]}...{file_path[-10:]}"
return file_path
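# Doctest-style sketch of the helpers above (values are illustrative):
#   >>> split_query("data_path=/entry&file_prefix=vol_")
#   {'data_path': '/entry', 'file_prefix': 'vol_'}
#   >>> join_query([("data_path", "/entry"), ("file_prefix", "vol_")])
#   'data_path=/entry&file_prefix=vol_'
#   >>> reduce_file_path("a_very_long_file_name_indeed.hdf5")
#   'a_very_lon...ndeed.hdf5'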
===== tomoscan-1.2.2/tomoscan/esrf/mock.py =====
from silx.utils.deprecation import deprecated_warning
deprecated_warning(
"Module",
name="tomoscan.esrf.mock",
    reason="Has been moved",
replacement="tomoscan.esrf.scan.mock",
only_once=True,
)
from .scan.mock import * # noqa F401
===== tomoscan-1.2.2/tomoscan/esrf/scan/__init__.py =====
===== tomoscan-1.2.2/tomoscan/esrf/scan/edfscan.py =====
# coding: utf-8
# /*##########################################################################
# Copyright (C) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""contains EDFTomoScan, class to be used with EDF acquisition"""
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "10/10/2019"
import os
import re
import fabio
import copy
from lxml import etree
import json
import io
from typing import Union, Iterable, Optional
import numpy
from silx.io.url import DataUrl
from silx.utils.deprecation import deprecated
from tomoscan.esrf.identifier.edfidentifier import EDFTomoScanIdentifier
from tomoscan.scanbase import TomoScanBase
from tomoscan.scanbase import Source
from .utils import get_parameters_frm_par_or_info, extract_urls_from_edf
from tomoscan.unitsystem.metricsystem import MetricSystem
from tomoscan.utils import docstring
from .framereducer import EDFFrameReducer
import logging
from tomoscan.identifier import ScanIdentifier
_logger = logging.getLogger(__name__)
class EDFTomoScan(TomoScanBase):
"""
    TomoScanBase instantiation for scans defined from .edf files
:param scan: path to the root folder containing the scan.
:type scan: Union[str,None]
:param dataset_basename: prefix of the dataset to handle
:type: Optional[str]
:param scan_info: dictionary providing dataset information. Provided keys will overwrite information contained in .info.
Valid keys are: TODO
:type: Optional[dict]
:param n_frames: Number of frames in each EDF file.
If not provided, it will be inferred by reading the files.
In this case, the frame number is guessed from the file name.
:type n_frames: Union[int, None]=None
"""
_TYPE = "edf"
INFO_EXT = ".info"
ABORT_FILE = ".abo"
_REFHST_PREFIX = "refHST"
_DARKHST_PREFIX = "dark.edf"
_SCHEME = "fabio"
REDUCED_DARKS_DATAURLS = (
DataUrl(
file_path="{scan_prefix}_darks.hdf5",
data_path="{entry}/darks/{index}",
scheme="silx",
), # _darks.hdf5 and _flats.hdf5 are the default location of the reduced darks and flats.
DataUrl(file_path="dark.edf", scheme=_SCHEME),
)
REDUCED_DARKS_METADATAURLS = (
DataUrl(
file_path="{scan_prefix}_darks.hdf5",
data_path="{entry}/darks/",
scheme="silx",
),
        # even if no metadata urls are provided for EDF: if the output is EDF, the metadata will be stored in the headers
)
REDUCED_FLATS_DATAURLS = (
DataUrl(
file_path="{scan_prefix}_flats.hdf5",
data_path="{entry}/flats/{index}",
scheme="silx",
), # _darks.hdf5 and _flats.hdf5 are the default location of the reduced darks and flats.
DataUrl(
file_path="refHST{index_zfill4}.edf", scheme=_SCHEME
        ),  # .edf is kept for compatibility
)
REDUCED_FLATS_METADATAURLS = (
DataUrl(
file_path="{scan_prefix}_flats.hdf5",
data_path="{entry}/flats/",
scheme="silx",
),
        # even if no metadata urls are provided for EDF: if the output is EDF, the metadata will be stored in the headers
)
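    # Illustrative expansion of the templates above (hypothetical scan named
    # "sample", HDF5 entry "entry" and frame index 0):
    #   "{scan_prefix}_flats.hdf5" -> "sample_flats.hdf5"
    #   "{entry}/flats/{index}"    -> "entry/flats/0"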
FRAME_REDUCER_CLASS = EDFFrameReducer
def __init__(
self,
scan: Optional[str],
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
n_frames: Optional[int] = None,
ignore_projections: Optional[Iterable] = None,
):
TomoScanBase.__init__(
self, scan=scan, type_=self._TYPE, ignore_projections=ignore_projections
)
# data caches
self._darks = None
self._flats = None
self.__tomo_n = None
self.__flat_n = None
self.__dark_n = None
self.__dim1 = None
self.__dim2 = None
self.__pixel_size = None
self.__flat_on = None
self.__scan_range = None
self._edf_n_frames = n_frames
self.__distance = None
self.__energy = None
self.__estimated_cor_frm_motor = None
self._source = Source()
"""Source is not handle by EDFScan"""
self._scan_info = None
self.scan_info = scan_info
self._dataset_basename = dataset_basename
@property
def scan_info(self) -> Optional[dict]:
return self._scan_info
@scan_info.setter
def scan_info(self, scan_info: Optional[dict]) -> None:
if not isinstance(scan_info, (type(None), dict)):
raise TypeError("scan info is expected to be None or an instance of dict")
used_keys = (
"TOMO_N",
"DARK_N",
"REF_N",
"REF_ON",
"ScanRange",
"Dim_1",
"Dim_2",
"Distance",
"PixelSize",
"SrCurrent",
)
other_keys = (
"Prefix",
"Directory",
"Y_STEP",
"Count_time",
"Col_end",
"Col_beg",
"Row_end",
"Row_beg",
"Optic_used",
"Date",
"Scan_Type",
"CCD_Mode",
"CTAngle",
"Min",
"Max",
"Sub_vols",
)
valid_keys = used_keys + other_keys
valid_keys = [key.lower() for key in valid_keys]
if isinstance(scan_info, dict):
for key in scan_info.keys():
if key not in valid_keys:
_logger.warning(f"{key} unrecognized. Valid keys are {valid_keys}")
self._scan_info = scan_info
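    # Usage sketch (hypothetical path; keys are matched lower-case by the setter above,
    # and the docstring states that provided keys overwrite values read from .info):
    #   scan = EDFTomoScan("/data/sample", scan_info={"tomo_n": 3600, "pixelsize": 0.74})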
@docstring(TomoScanBase.clear_caches)
def clear_caches(self):
super().clear_caches()
self._projections = None
self.__dim1 = None
self.__dim2 = None
self.__pixel_size = None
def clear_frames_caches(self):
self._darks = None
self._flats = None
self.__tomo_n = None
self.__flat_n = None
self.__dark_n = None
self.__flat_on = None
self.__scan_range = None
super().clear_frames_caches()
@docstring(TomoScanBase.tomo_n)
@property
def tomo_n(self) -> Union[None, int]:
if self.__tomo_n is None:
self.__tomo_n = EDFTomoScan.get_tomo_n(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__tomo_n
@property
@docstring(TomoScanBase.dark_n)
def dark_n(self) -> Union[None, int]:
if self.__dark_n is None:
self.__dark_n = EDFTomoScan.get_dark_n(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__dark_n
@property
@docstring(TomoScanBase.flat_n)
def flat_n(self) -> Union[None, int]:
if self.__flat_n is None:
self.__flat_n = EDFTomoScan.get_ref_n(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__flat_n
@property
@docstring(TomoScanBase.pixel_size)
def pixel_size(self) -> Union[None, int]:
"""
:return: pixel size
:rtype: float
"""
if self.__pixel_size is None:
self.__pixel_size = EDFTomoScan._get_pixel_size(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__pixel_size
@property
def x_pixel_size(self) -> Optional[float]:
"""For EDF only square pixel size is handled"""
return self.pixel_size
@property
def y_pixel_size(self) -> Optional[float]:
"""For EDF only square pixel size is handled"""
return self.pixel_size
@property
@deprecated(replacement="", since_version="1.1.0")
def x_real_pixel_size(self) -> Union[None, float]:
if self.pixel_size is not None and self.magnification is not None:
return self.pixel_size * self.magnification
else:
return None
@property
@deprecated(replacement="", since_version="1.1.0")
def y_real_pixel_size(self) -> Union[None, float]:
if self.pixel_size is not None and self.magnification is not None:
return self.pixel_size * self.magnification
else:
return None
@property
@docstring(TomoScanBase.dim_1)
def dim_1(self) -> Union[None, int]:
"""
:return: image dim1
:rtype: int
"""
if self.__dim1 is None and self.path is not None:
self.__dim1, self.__dim2 = EDFTomoScan.get_dim1_dim2(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__dim1
@property
@docstring(TomoScanBase.dim_2)
def dim_2(self) -> Union[None, int]:
"""
:return: image dim2
:rtype: int
"""
if self.__dim2 is None and self.path is not None:
self.__dim1, self.__dim2 = EDFTomoScan.get_dim1_dim2(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__dim2
@property
@docstring(TomoScanBase.x_translation)
def x_translation(self) -> Union[None, tuple]:
raise NotImplementedError("Not supported for EDF")
@property
@docstring(TomoScanBase.y_translation)
def y_translation(self) -> Union[None, tuple]:
raise NotImplementedError("Not supported for EDF")
@property
@docstring(TomoScanBase.z_translation)
def z_translation(self) -> Union[None, tuple]:
raise NotImplementedError("Not supported for EDF")
@property
@docstring(TomoScanBase.ff_interval)
def ff_interval(self) -> Union[None, int]:
if self.__flat_on is None and self.path is not None:
self.__flat_on = EDFTomoScan.get_ff_interval(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__flat_on
@property
@docstring(TomoScanBase.scan_range)
def scan_range(self) -> Union[None, int]:
if self.__scan_range is None and self.path is not None:
self.__scan_range = EDFTomoScan.get_scan_range(
scan=self.path,
dataset_basename=self.dataset_basename,
scan_info=self.scan_info,
)
return self.__scan_range
@property
@docstring(TomoScanBase.flats)
def flats(self) -> Union[None, dict]:
"""
flats are given as a dictionary with index as key and DataUrl as
value"""
if self._flats is None and self.path is not None:
self._flats = self.get_flats_url(
scan_path=self.path,
dataset_basename=self.dataset_basename,
)
return self._flats
@property
@docstring(TomoScanBase.projections)
def projections(self) -> Union[None, dict]:
if self._projections is None and self.path is not None:
self._reload_projections()
return self._projections
@property
@docstring(TomoScanBase.alignment_projections)
def alignment_projections(self) -> None:
if self._alignment_projections is None and self.path is not None:
self._reload_projections()
return self._alignment_projections
@docstring(TomoScanBase.is_tomoscan_dir)
@staticmethod
def is_tomoscan_dir(
directory: str, dataset_basename: Optional[str] = None, **kwargs
) -> bool:
return os.path.isfile(
EDFTomoScan.get_info_file(
directory=directory, dataset_basename=dataset_basename, kwargs=kwargs
)
)
@staticmethod
def get_info_file(
directory: str, dataset_basename: Optional[str] = None, **kwargs
) -> str:
if dataset_basename is None:
dataset_basename = os.path.basename(directory)
assert dataset_basename != ""
info_file = os.path.join(directory, dataset_basename + EDFTomoScan.INFO_EXT)
if "src_pattern" in kwargs and kwargs["src_pattern"] is not None:
assert "dest_pattern" in kwargs
info_file = info_file.replace(
kwargs["src_pattern"], kwargs["dest_pattern"], 1
)
return info_file
@docstring(TomoScanBase.is_abort)
def is_abort(self, **kwargs) -> bool:
abort_file = self.dataset_basename + self.ABORT_FILE
abort_file = os.path.join(self.path, abort_file)
if "src_pattern" in kwargs and kwargs["src_pattern"] is not None:
assert "dest_pattern" in kwargs
abort_file = abort_file.replace(
kwargs["src_pattern"], kwargs["dest_pattern"]
)
return os.path.isfile(abort_file)
@property
@docstring(TomoScanBase.darks)
def darks(self) -> dict:
if self._darks is None and self.path is not None:
self._darks = self.get_darks_url(
scan_path=self.path, dataset_basename=self.dataset_basename
)
return self._darks
@docstring(TomoScanBase.get_proj_angle_url)
def get_proj_angle_url(self) -> dict:
# TODO: we might use fabio.open_serie instead
if self.path is None:
_logger.warning(
"no path specified for scan, unable to retrieve the projections"
)
return {}
n_projection = self.tomo_n
data_urls = EDFTomoScan.get_proj_urls(
self.path, dataset_basename=self.dataset_basename
)
return TomoScanBase.map_urls_on_scan_range(
urls=data_urls, n_projection=n_projection, scan_range=self.scan_range
)
@docstring(TomoScanBase.update)
def update(self):
if self.path is not None:
self._reload_projections()
            self._darks = EDFTomoScan.get_darks_url(
                self.path, dataset_basename=self.dataset_basename
            )
            self._flats = EDFTomoScan.get_flats_url(
                self.path, dataset_basename=self.dataset_basename
            )
@docstring(TomoScanBase.load_from_dict)
def load_from_dict(self, desc: Union[dict, io.TextIOWrapper]):
if isinstance(desc, io.TextIOWrapper):
data = json.load(desc)
else:
data = desc
if not (self.DICT_TYPE_KEY in data and data[self.DICT_TYPE_KEY] == self._TYPE):
raise ValueError("Description is not an EDFScan json description")
assert self.DICT_PATH_KEY in data
self.path = data[self.DICT_PATH_KEY]
return self
@staticmethod
def get_proj_urls(
scan: str,
dataset_basename: Optional[str] = None,
n_frames: Union[int, None] = None,
) -> dict:
"""
Return the dict of radios / projection for the given scan.
Keys of the dictionary is the slice number
Return all the file on the root of scan starting by the name of scan and
ending by .edf
:param scan: is the path to the folder of acquisition
:type scan: str
:param n_frames: Number of frames in each EDF file.
If not provided, it is inferred by reading each file.
:type n_frames: int
:return: dict of radios files with radio index as key and file as value
:rtype: dict
"""
urls = dict({})
if (scan is None) or not (os.path.isdir(scan)):
return urls
if dataset_basename is None:
dataset_basename = os.path.basename(scan)
if os.path.isdir(scan):
for f in os.listdir(scan):
if EDFTomoScan.is_a_proj_path(
fileName=f, dataset_basename=dataset_basename, scanID=scan
):
gfile = os.path.join(scan, f)
index = EDFTomoScan.guess_index_frm_file_name(
gfile, basename=dataset_basename
)
urls.update(
extract_urls_from_edf(
start_index=index, file_=gfile, n_frames=n_frames
)
)
return urls
@staticmethod
def is_a_proj_path(
fileName: str, scanID: str, dataset_basename: Optional[str] = None
) -> bool:
"""Return True if the given fileName can fit to a Radio name"""
fileBasename = os.path.basename(fileName)
if dataset_basename is None:
dataset_basename = os.path.basename(scanID)
if fileBasename.endswith(".edf") and fileBasename.startswith(dataset_basename):
            localstring = fileName[: -len(".edf")]  # strip the ".edf" suffix (rstrip would strip characters, not the suffix)
# remove the scan
localstring = re.sub(dataset_basename, "", localstring)
if "slice_" in localstring:
# case of a reconstructed file
return False
if "refHST" in localstring:
return False
s = localstring.split("_")
if s[-1].isdigit():
# check that the next value is a digit
return True
return False
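    # Illustrative behaviour (hypothetical file names, dataset basename "sample"):
    #   is_a_proj_path("sample_0100.edf", scanID="/data/sample")        -> True
    #   is_a_proj_path("sample_slice_0100.edf", scanID="/data/sample")  -> False (reconstructed slice)
    #   is_a_proj_path("refHST0000.edf", scanID="/data/sample")         -> False (flat field)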
@staticmethod
def guess_index_frm_file_name(_file: str, basename: str) -> Union[None, int]:
"""
        Guess the index of the file. The index is usually an integer but can be
        a float, for 'ref' frames for example, if several are taken.
:param _file:
:param basename:
"""
def extract_index(my_str, type_):
res = []
modified_str = copy.copy(my_str)
while modified_str != "" and modified_str[-1].isdigit():
res.append(modified_str[-1])
modified_str = modified_str[:-1]
if len(res) == 0:
return None, modified_str
else:
orignalOrder = res[::-1]
if type_ is int:
return int("".join(orignalOrder)), modified_str
else:
return float(".".join(("0", "".join(orignalOrder)))), modified_str
_file = os.path.basename(_file)
if _file.endswith(".edf"):
name = _file.replace(basename, "", 1)
name = name.rstrip(".edf")
part_1, name = extract_index(name, type_=int)
if name.endswith("_"):
name = name.rstrip("_")
part_2, name = extract_index(name, type_=float)
else:
part_2 = None
if part_1 is None:
return None
            if part_2 is None:
                return int(part_1)
else:
return float(part_1) + part_2
else:
raise ValueError("only edf files are managed")
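    # Illustrative behaviour (hypothetical file names):
    #   guess_index_frm_file_name("scan_0042.edf", basename="scan")    -> 42
    #   guess_index_frm_file_name("ref0001_0010.edf", basename="ref")  -> 10.0001 (float: 'ref' taken several times)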
@staticmethod
def get_tomo_n(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, int]:
return EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="TOMO_N",
type_=int,
key_aliases=["tomo_N", "Tomo_N"],
scan_info=scan_info,
)
@staticmethod
def get_dark_n(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, int]:
return EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="DARK_N",
type_=int,
key_aliases=[
"dark_N",
],
scan_info=scan_info,
)
@staticmethod
def get_ref_n(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, int]:
return EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="REF_N",
type_=int,
key_aliases=[
"ref_N",
],
scan_info=scan_info,
)
@staticmethod
def get_ff_interval(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, int]:
return EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="REF_ON",
type_=int,
key_aliases=[
"ref_On",
],
scan_info=scan_info,
)
@staticmethod
def get_scan_range(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, int]:
return EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="ScanRange",
type_=int,
key_aliases=[
"scanRange",
],
scan_info=scan_info,
)
@staticmethod
def get_dim1_dim2(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, tuple]:
d1 = EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="Dim_1",
key_aliases=["projectionSize/DIM_1"],
type_=int,
scan_info=scan_info,
)
d2 = EDFTomoScan.retrieve_information(
scan=os.path.abspath(scan),
dataset_basename=dataset_basename,
ref_file=None,
key="Dim_2",
key_aliases=["projectionSize/DIM_2"],
type_=int,
)
return d1, d2
@property
@docstring(TomoScanBase.instrument_name)
def instrument_name(self) -> Union[None, str]:
"""
:return: instrument name
"""
return None
@property
@docstring(TomoScanBase.distance)
def distance(self) -> Union[None, float]:
if self.__distance is None:
self.__distance = EDFTomoScan.retrieve_information(
self.path,
dataset_basename=self.dataset_basename,
ref_file=None,
key="Distance",
type_=float,
key_aliases=("distance",),
scan_info=self.scan_info,
)
if self.__distance is None:
return None
else:
return self.__distance * MetricSystem.MILLIMETER.value
@property
@docstring(TomoScanBase.field_of_view)
def field_of_view(self):
# not managed for EDF files
return None
@property
@docstring(TomoScanBase.estimated_cor_frm_motor)
def estimated_cor_frm_motor(self):
# not managed for EDF files
return None
@property
@docstring(TomoScanBase.energy)
def energy(self):
if self.__energy is None:
self.__energy = EDFTomoScan.retrieve_information(
self.path,
dataset_basename=self.dataset_basename,
ref_file=None,
key="Energy",
type_=float,
key_aliases=("energy",),
scan_info=self.scan_info,
)
return self.__energy
@property
def count_time(self) -> Union[list, None]:
if self._count_time is None:
count_time = EDFTomoScan.retrieve_information(
self.path,
dataset_basename=self.dataset_basename,
ref_file=None,
key="Count_time",
type_=float,
key_aliases=("CountTime"),
scan_info=self.scan_info,
)
if count_time is not None:
if self.tomo_n is not None:
self._count_time = [count_time] * self.tomo_n
else:
self._count_time = count_time
return self._count_time
@property
@docstring(TomoScanBase.electric_current)
def electric_current(self) -> tuple:
if self._electric_current is None:
electric_current = EDFTomoScan.retrieve_information(
self.path,
dataset_basename=self.dataset_basename,
ref_file=None,
key="SrCurrent",
type_=float,
key_aliases=("SRCUR", "machineCurrentStart"),
scan_info=self.scan_info,
)
if electric_current is not None:
if self.tomo_n is not None:
self._electric_current = [electric_current] * self.tomo_n
else:
self._electric_current = electric_current
return self._electric_current
@staticmethod
def _get_pixel_size(
scan: str,
dataset_basename: Optional[str] = None,
scan_info: Optional[dict] = None,
) -> Union[None, float]:
if os.path.isdir(scan) is False:
return None
value = EDFTomoScan.retrieve_information(
scan=scan,
dataset_basename=dataset_basename,
ref_file=None,
key="PixelSize",
type_=float,
key_aliases=[
"pixelSize",
],
scan_info=scan_info,
)
        if value is None:
            if dataset_basename is None:
                dataset_basename = os.path.basename(scan)
            parFile = os.path.join(scan, dataset_basename + ".par")
            if os.path.exists(parFile):
                ddict = {}
                try:
                    ddict = get_parameters_frm_par_or_info(parFile)
                except ValueError as e:
                    _logger.error(e)
                if "image_pixel_size_1" in ddict:
                    value = float(ddict["image_pixel_size_1"])
        # for now pixel sizes are stored in microns;
        # we want to return them in meters
if value is not None:
return value * MetricSystem.MICROMETER.value
else:
return None
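    # Unit handling sketch (illustrative values): a "PixelSize" of 3.5
    # (microns) read from the .info or .par file is returned as
    # 3.5 * MetricSystem.MICROMETER.value, i.e. 3.5e-06 meters.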
@staticmethod
def get_darks_url(
scan_path: str,
dataset_basename: Optional[str] = None,
prefix: str = "dark",
file_ext: str = ".edf",
) -> dict:
"""
:param scan_path:
:type scan_path: str
        :param prefix: dark file prefix
        :type prefix: str
        :param file_ext: dark file extension
        :type file_ext: str
        :return: dict of dark frames as silx's `DataUrl`
"""
res = {}
if os.path.isdir(scan_path) is False:
            _logger.error(
                scan_path + " is not a directory. Cannot extract DarkHST files"
            )
return res
if dataset_basename is None:
dataset_basename = os.path.basename(scan_path)
for file_ in os.listdir(scan_path):
_prefix = prefix
if prefix.endswith(file_ext):
_prefix = prefix.rstrip(file_ext)
if file_.startswith(_prefix) and file_.endswith(file_ext):
                # usually the dark file name should be dark.edf, but some
                # darkHSTXXXX remain...
file_fp = file_.lstrip(_prefix).rstrip(file_ext).lstrip("HST")
if file_fp == "" or file_fp.isnumeric() is True:
index = EDFTomoScan.guess_index_frm_file_name(
_file=file_, basename=dataset_basename
)
urls = extract_urls_from_edf(
os.path.join(scan_path, file_), start_index=index
)
res.update(urls)
return res
@staticmethod
def get_flats_url(
scan_path: str,
dataset_basename: Optional[str] = None,
prefix: str = "refHST",
file_ext: str = ".edf",
ignore=None,
) -> dict:
"""
:param scan_path:
:type scan_path: str
:param prefix: flat frame file prefix
:type prefix: str
:param file_ext: flat frame file extension
:type file_ext: str
        :return: dict of flat frames as silx's `DataUrl`
"""
res = {}
if os.path.isdir(scan_path) is False:
            _logger.error(
                scan_path + " is not a directory. Cannot extract RefHST files"
            )
return res
def get_next_free_index(key, keys):
"""return next free key from keys by converting it to a string
with `key_value (n)` after it
"""
new_key = key
index = 2
while new_key in keys:
new_key = "%s (%s)" % (key, index)
index += 1
return new_key
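        # e.g. (illustrative) if "1500" is already used as a key then
        # get_next_free_index("1500", res.keys()) returns "1500 (2)",
        # then "1500 (3)", and so on.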
def ignore_file(file_name, to_ignore):
if to_ignore is None:
return False
for pattern in to_ignore:
if pattern in file_name:
return True
return False
if dataset_basename is None:
dataset_basename = os.path.basename(scan_path)
for file_ in os.listdir(scan_path):
if (
file_.startswith(prefix)
and file_.endswith(file_ext)
and not ignore_file(file_, ignore)
):
index = EDFTomoScan.guess_index_frm_file_name(
_file=file_,
basename=dataset_basename,
)
file_fp = os.path.join(scan_path, file_)
urls = extract_urls_from_edf(start_index=index, file_=file_fp)
for key in urls:
if key in res:
key_ = get_next_free_index(key, res.keys())
else:
key_ = key
res[key_] = urls[key]
return res
@property
    def x_flipped(self) -> Optional[bool]:
        # frame flipping information is not stored for EDF acquisitions
        return None
    @property
    def y_flipped(self) -> Optional[bool]:
        # frame flipping information is not stored for EDF acquisitions
        return None
def _reload_projections(self):
if self.path is None:
return None
else:
all_projections = EDFTomoScan.get_proj_urls(
self.path,
n_frames=self._edf_n_frames,
dataset_basename=self.dataset_basename,
)
def select_proj(ddict, from_, to_):
indexes = sorted(set(ddict.keys()))
sel_indexes = indexes[from_:to_]
res = {}
for index in sel_indexes:
res[index] = ddict[index]
return res
if self.tomo_n is not None and len(all_projections) > self.tomo_n:
self._projections = select_proj(all_projections, 0, self.tomo_n)
self._alignment_projections = select_proj(
all_projections, self.tomo_n, None
)
else:
self._projections = all_projections
self._alignment_projections = {}
if self.ignore_projections is not None:
for idx in self.ignore_projections:
self._projections.pop(idx, None)
@staticmethod
def retrieve_information(
scan: str,
dataset_basename: Optional[str],
ref_file: Union[str, None],
key: str,
type_: type,
key_aliases: Union[list, tuple, None] = None,
scan_info: Optional[dict] = None,
):
"""
        Try to retrieve information from a .info file, an .xml file or a
        flat field file.
        Look for the key 'key' or one of its aliases.
        :param scan: root folder of an acquisition. Must be an absolute path
        :param ref_file: the refXXXX_YYYY file which should contain information
                         about the scan. A ref in esrf vocabulary is a flat.
        :param key: the key (information) we are looking for
        :type key: str
        :param type_: requested output type if the information is found
        :type type_: type
        :param key_aliases: aliases of the key in the different files
        :type key_aliases: list
        :param scan_info: dict containing keys that can overwrite the .info file content
        :type scan_info: dict
        :return: the requested information or None if not found
"""
info_aliases = [key]
if key_aliases is not None:
assert type(key_aliases) in (tuple, list)
            info_aliases.extend(key_aliases)
if scan_info is not None:
if key in scan_info:
return scan_info[key]
elif key.lower() in scan_info:
return scan_info[key.lower()]
if not os.path.isdir(scan):
return None
# 1st look for ref file if any given
def parseRefFile(filePath):
with fabio.open(filePath) as ref_file:
header = ref_file.header
for k in key_aliases:
if k in header:
return type_(header[k])
return None
if ref_file is not None and os.path.isfile(ref_file):
try:
info = parseRefFile(ref_file)
except IOError as e:
_logger.warning(e)
else:
if info is not None:
return info
# 2nd look for .info file
def parseInfoFile(filePath):
def extractInformation(text, alias):
text = text.replace(alias, "")
text = text.replace("\n", "")
text = text.replace(" ", "")
text = text.replace("=", "")
return type_(text)
            info = None
            with open(filePath, "r") as f:
                for line in f:
                    for alias in info_aliases:
                        if alias in line:
                            info = extractInformation(line, alias)
                            break
            return info
if dataset_basename is None:
dataset_basename = os.path.basename(scan)
infoFiles = [os.path.join(scan, dataset_basename + ".info")]
        infoOnDataVisitor = infoFiles[0].replace("lbsram", "", 1)
        # hack to check in lbsram; would need to be removed to add some consistency
if os.path.isfile(infoOnDataVisitor):
infoFiles.append(infoOnDataVisitor)
for infoFile in infoFiles:
if os.path.isfile(infoFile) is True:
info = parseInfoFile(infoFile)
if info is not None:
return info
        # 3rd look for xml files
def parseXMLFile(filePath):
try:
for alias in info_aliases:
tree = etree.parse(filePath)
elmt = tree.find("acquisition/" + alias)
if elmt is None:
continue
else:
info = type_(elmt.text)
if info == -1:
return None
else:
return info
except etree.XMLSyntaxError as e:
_logger.warning(e)
return None
xmlFiles = [os.path.join(scan, dataset_basename + ".xml")]
        xmlOnDataVisitor = xmlFiles[0].replace("lbsram", "", 1)
        # hack to check in lbsram; would need to be removed to add some consistency
if os.path.isfile(xmlOnDataVisitor):
xmlFiles.append(xmlOnDataVisitor)
for xmlFile in xmlFiles:
if os.path.isfile(xmlFile) is True:
info = parseXMLFile(xmlFile)
if info is not None:
return info
return None
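    # Lookup order implemented by retrieve_information (summary sketch):
    #   1. the optional scan_info dict (key, then key.lower())
    #   2. the header of ref_file when one is given
    #   3. <scan>/<dataset_basename>.info (also checked outside 'lbsram')
    #   4. <scan>/<dataset_basename>.xml, node 'acquisition/<alias>'
    # e.g. (illustrative call)
    # EDFTomoScan.retrieve_information("/path/to/scan", None, None,
    #                                  key="TOMO_N", type_=int)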
def get_range(self):
if self.path is not None:
            return self.get_scan_range(self.path, scan_info=self.scan_info)
else:
return None
def get_flat_expected_location(self):
return os.path.join(self.dataset_basename, "refHST[*].edf")
def get_dark_expected_location(self):
return os.path.join(self.dataset_basename, "dark[*].edf")
def get_projection_expected_location(self):
return os.path.join(os.path.basename(self.path), self.dataset_basename, "*.edf")
def _get_info_file_path_short_name(self):
info_file = self.get_info_file_path(scan=self)
return os.path.join(
os.path.basename(os.path.dirname(info_file)), self.dataset_basename
)
def get_energy_expected_location(self):
return "::".join((self._get_info_file_path_short_name(), "Energy"))
def get_distance_expected_location(self):
return "::".join((self._get_info_file_path_short_name(), "Distance"))
def get_pixel_size_expected_location(self):
return "::".join((self._get_info_file_path_short_name(), "PixelSize"))
@staticmethod
def get_info_file_path(scan):
if not isinstance(scan, EDFTomoScan):
raise TypeError(f"{scan} is expected to be an {EDFTomoScan}")
if scan.path is None:
return None
scan_path = os.path.abspath(scan.path)
return os.path.join(scan_path, scan.dataset_basename + ".info")
def __str__(self):
return f" edf scan({os.path.basename(os.path.abspath(self.path))})"
@docstring(TomoScanBase.get_relative_file)
def get_relative_file(
self, file_name: str, with_dataset_prefix=True
) -> Optional[str]:
if self.path is not None:
if with_dataset_prefix:
basename = self.dataset_basename
basename = "_".join((basename, file_name))
return os.path.join(self.path, basename)
else:
return os.path.join(self.path, file_name)
else:
return None
def get_dataset_basename(self) -> str:
return self.dataset_basename
@property
def dataset_basename(self) -> Optional[str]:
if self._dataset_basename is not None:
return self._dataset_basename
elif self.path is None:
return None
else:
return os.path.basename(self.path)
@docstring(TomoScanBase)
def save_reduced_darks(
self,
darks: dict,
output_urls: tuple = REDUCED_DARKS_DATAURLS,
darks_infos=None,
metadata_output_urls=REDUCED_DARKS_METADATAURLS,
):
if len(darks) > 1:
_logger.warning(
"EDFTomoScan expect at most one dark. Only one will be save"
)
super().save_reduced_darks(
darks=darks,
output_urls=output_urls,
darks_infos=darks_infos,
metadata_output_urls=metadata_output_urls,
)
@docstring(TomoScanBase)
def load_reduced_darks(
self,
inputs_urls: tuple = REDUCED_DARKS_DATAURLS,
metadata_input_urls: tuple = REDUCED_DARKS_METADATAURLS,
return_as_url: bool = False,
return_info: bool = False,
) -> dict:
darks = super().load_reduced_darks(
inputs_urls=inputs_urls,
metadata_input_urls=metadata_input_urls,
return_as_url=return_as_url,
return_info=return_info,
)
        # for edf we don't expect the dark to have an index, so we set it to frame index 0 by default
if None in darks:
dark_frame = darks[None]
del darks[None]
if 0 in darks:
_logger.warning("Two frame found for index 0")
else:
darks[0] = dark_frame
return darks
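    # Illustrative: darks loaded as {None: frame} are remapped by the block
    # above to {0: frame}, since EDF reduced darks carry no frame index.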
@docstring(TomoScanBase)
def save_reduced_flats(
self,
flats: dict,
output_urls: tuple = REDUCED_FLATS_DATAURLS,
flats_infos=None,
metadata_output_urls=REDUCED_FLATS_METADATAURLS,
    ) -> None:
super().save_reduced_flats(
flats=flats,
output_urls=output_urls,
flats_infos=flats_infos,
metadata_output_urls=metadata_output_urls,
)
@docstring(TomoScanBase)
def load_reduced_flats(
self,
inputs_urls: tuple = REDUCED_FLATS_DATAURLS,
metadata_input_urls: tuple = REDUCED_FLATS_METADATAURLS,
return_as_url: bool = False,
return_info=False,
) -> dict:
return super().load_reduced_flats(
inputs_urls=inputs_urls,
return_as_url=return_as_url,
return_info=return_info,
metadata_input_urls=metadata_input_urls,
)
@docstring(TomoScanBase.compute_reduced_flats)
def compute_reduced_flats(
self,
reduced_method="median",
overwrite=True,
output_dtype=numpy.int32,
return_info=False,
):
return super().compute_reduced_flats(
reduced_method=reduced_method,
overwrite=overwrite,
output_dtype=output_dtype,
return_info=return_info,
)
    @docstring(TomoScanBase.compute_reduced_darks)
def compute_reduced_darks(
self,
reduced_method="mean",
overwrite=True,
output_dtype=numpy.uint16,
return_info=False,
):
return super().compute_reduced_darks(
reduced_method=reduced_method,
overwrite=overwrite,
output_dtype=output_dtype,
return_info=return_info,
)
@staticmethod
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, EDFTomoScanIdentifier):
raise TypeError(
f"identifier should be an instance of {EDFTomoScanIdentifier} not {type(identifier)}"
)
return EDFTomoScan(scan=identifier.folder)
@docstring(TomoScanBase)
def get_identifier(self) -> ScanIdentifier:
return EDFTomoScanIdentifier(
object=self, folder=self.path, file_prefix=self.dataset_basename
)
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1684328205.161433
tomoscan-1.2.2/tomoscan/esrf/scan/framereducer/ 0000755 0236253 0006511 00000000000 00000000000 021645 5 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/scan/framereducer/__init__.py 0000644 0236253 0006511 00000000166 00000000000 023761 0 ustar 00payno soft 0000000 0000000 from .hdf5framereducer import HDF5FrameReducer # noqa F401
from .edfframereducer import EDFFrameReducer # noqa F401
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/scan/framereducer/edfframereducer.py 0000644 0236253 0006511 00000061403 00000000000 025346 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "04/01/2022"
from typing import Optional
from tomoscan.framereducerbase import (
REDUCER_TARGET,
FrameReducerBase,
ReduceMethod,
)
from tomoscan.scanbase import TomoScanBase, ReducedFramesInfos
from lxml import etree
import os
import numpy
import fabio
import logging
import re
from glob import glob
_logger = logging.getLogger(__name__)
class EDFFrameReducer(FrameReducerBase):
RAW_FLAT_RE = "ref*.*[0-9]{3,4}_[0-9]{3,4}"
"""regular expression to discover flat files"""
RAW_DARK_RE = "darkend[0-9]{3,4}"
"""regular expression to discover raw dark files"""
REFHST_PREFIX = "refHST"
DARKHST_PREFIX = "dark.edf"
def __init__(
self,
scan: TomoScanBase,
reduced_method: ReduceMethod,
target: REDUCER_TARGET,
output_dtype: Optional[numpy.dtype],
input_flat_pattern=RAW_FLAT_RE,
input_dark_pattern=RAW_DARK_RE,
flat_output_prefix=REFHST_PREFIX,
dark_output_prefix=DARKHST_PREFIX,
overwrite=False,
file_ext=".edf",
):
super().__init__(
scan, reduced_method, target, overwrite=overwrite, output_dtype=output_dtype
)
self._input_flat_pattern = input_flat_pattern
self._input_dark_pattern = input_dark_pattern
self._dark_output_prefix = dark_output_prefix
self._flat_output_prefix = flat_output_prefix
self._file_ext = file_ext
@property
def input_flat_pattern(self):
return self._input_flat_pattern
@property
def input_dark_pattern(self):
return self._input_dark_pattern
@staticmethod
def _getInformation(scan, refFile, information, _type, aliases=None):
"""
Parse files contained in the given directory to get the requested
information
:param scan: directory containing the acquisition. Must be an absolute path
:param refFile: the refXXXX_YYYY which should contain information about the
scan.
:return: the requested information or None if not found
"""
def parseRefFile(filePath):
header = fabio.open(filePath).header
for k in aliases:
if k in header:
return _type(header[k])
return None
def parseXMLFile(filePath):
try:
for alias in info_aliases:
tree = etree.parse(filePath)
elmt = tree.find("acquisition/" + alias)
if elmt is None:
continue
else:
info = _type(elmt.text)
if info == -1:
return None
else:
return info
except etree.XMLSyntaxError as e:
_logger.warning(e)
return None
def parseInfoFile(filePath):
def extractInformation(text, alias):
text = text.replace(alias, "")
text = text.replace("\n", "")
text = text.replace(" ", "")
text = text.replace("=", "")
return _type(text)
            info = None
            with open(filePath, "r") as f:
                for line in f:
                    for alias in info_aliases:
                        if alias in line:
                            info = extractInformation(line, alias)
                            break
            return info
info_aliases = [information]
if aliases is not None:
assert type(aliases) in (tuple, list)
            info_aliases.extend(aliases)
if not os.path.isdir(scan):
return None
if refFile is not None and os.path.isfile(refFile):
try:
info = parseRefFile(refFile)
except IOError as e:
_logger.warning(e)
else:
if info is not None:
return info
baseName = os.path.basename(scan)
infoFiles = [os.path.join(scan, baseName + ".info")]
infoOnDataVisitor = infoFiles[0].replace("lbsram", "", 1)
# hack to check in lbsram, would need to be removed to add some consistency
if os.path.isfile(infoOnDataVisitor):
infoFiles.append(infoOnDataVisitor)
for infoFile in infoFiles:
if os.path.isfile(infoFile) is True:
info = parseInfoFile(infoFile)
if info is not None:
return info
xmlFiles = [os.path.join(scan, baseName + ".xml")]
xmlOnDataVisitor = xmlFiles[0].replace("lbsram", "", 1)
# hack to check in lbsram, would need to be removed to add some consistency
if os.path.isfile(xmlOnDataVisitor):
xmlFiles.append(xmlOnDataVisitor)
for xmlFile in xmlFiles:
if os.path.isfile(xmlFile) is True:
info = parseXMLFile(xmlFile)
if info is not None:
return info
return None
@staticmethod
def getDARK_N(scan):
return EDFFrameReducer._getInformation(
os.path.abspath(scan),
refFile=None,
information="DARK_N",
_type=int,
aliases=["dark_N"],
)
@staticmethod
def getTomo_N(scan):
return EDFFrameReducer._getInformation(
os.path.abspath(scan),
refFile=None,
information="TOMO_N",
_type=int,
aliases=["tomo_N"],
)
@staticmethod
def get_closest_SR_current(scan_dir, refFile=None):
"""
        Parse files contained in the given directory to get the closest
        machine current (SrCurrent) value for the scan.
        :param scan_dir: directory containing the acquisition
        :param refFile: the refXXXX_YYYY file which should contain information
                        about the machine current.
        :return: the machine current or None if not found
"""
return EDFFrameReducer._getInformation(
os.path.abspath(scan_dir),
refFile,
information="SrCurrent",
aliases=["SRCUR", "machineCurrentStart"],
_type=float,
)
@staticmethod
def get_closest_count_time(scan_dir, refFile=None):
return EDFFrameReducer._getInformation(
os.path.abspath(scan_dir),
refFile,
information="Count_time",
aliases=tuple(),
_type=float,
)
def get_info(self, keyword: str):
with open(self.infofile) as file:
infod = file.readlines()
for line in infod:
if keyword in line:
return int(line.split("=")[1])
# not found:
return 0
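    # Illustrative: with an .info file containing the line "TOMO_N = 3600",
    # get_info("TOMO_N") returns 3600; 0 is returned when the keyword is
    # absent.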
def run(self) -> dict:
self._raw_darks = []
self._raw_flats = []
infos = ReducedFramesInfos()
directory = self.scan.path
res = {}
if not self.preprocess():
_logger.warning(f"preprocessing of {self.scan} failed")
else:
_logger.info(f"start proccess darks and flat fields for {self.scan}")
if self.reduced_method is ReduceMethod.NONE:
return None
shape = fabio.open(self.filelist_fullname[0]).shape
for i in range(len(self.serievec)):
largeMat = numpy.zeros(
(self.nframes * self.nFilePerSerie, shape[0], shape[1])
)
if (
self.reducer_target is REDUCER_TARGET.DARKS
and len(self.serievec) == 1
):
fileName = self.out_prefix
if fileName.endswith(self._file_ext) is False:
fileName = fileName + self._file_ext
else:
fileName = (
self.out_prefix.rstrip(self._file_ext)
+ self.serievec[i]
+ self._file_ext
)
fileName = os.path.join(directory, fileName)
if os.path.isfile(fileName):
if self.overwrite is False:
_logger.info("skip creation of %s, already existing" % fileName)
continue
if self.nFilePerSerie == 1:
fSerieName = os.path.join(directory, self.series[i])
header = {"method": self.reduced_method.name + " on 1 image"}
header["SRCUR"] = self.get_closest_SR_current(
scan_dir=directory, refFile=fSerieName
)
header["Count_time"] = self.get_closest_count_time(
scan_dir=directory,
refFile=fSerieName,
)
if self.nframes == 1:
largeMat[0] = fabio.open(fSerieName).data
else:
handler = fabio.open(fSerieName)
dShape = (self.nframes, handler.dim2, handler.dim1)
largeMat = numpy.zeros(dShape)
for iFrame in range(self.nframes):
largeMat[iFrame] = handler.getframe(iFrame).data
else:
header = {
"method": self.reduced_method.name
+ " on %d images" % self.nFilePerSerie
}
header["SRCUR"] = self.get_closest_SR_current(
scan_dir=directory, refFile=self.series[i][0]
)
header["Count_time"] = self.get_closest_count_time(
scan_dir=directory,
refFile=self.series[i][0],
)
for j, fName in zip(
range(self.nFilePerSerie), self.filesPerSerie[self.serievec[i]]
):
file_BigMat = fabio.open(fName)
if self.nframes > 1:
for fr in range(self.nframes):
jfr = fr + j * self.nframes
                                largeMat[jfr] = file_BigMat.getframe(fr).data
else:
largeMat[j] = file_BigMat.data
# update electrical machine current
if header["SRCUR"] is not None:
if infos.machine_electric_current is None:
infos.machine_electric_current = []
infos.machine_electric_current.append(header["SRCUR"])
if header["Count_time"] is not None:
if infos.count_time is None:
infos.count_time = []
infos.count_time.append(header["Count_time"])
if self.reduced_method is ReduceMethod.MEDIAN:
data = numpy.median(largeMat, axis=0)
elif self.reduced_method is ReduceMethod.MEAN:
data = numpy.mean(largeMat, axis=0)
elif self.reduced_method is ReduceMethod.FIRST:
data = largeMat[0]
elif self.reduced_method is ReduceMethod.LAST:
data = largeMat[-1]
elif self.reduced_method is ReduceMethod.NONE:
return
else:
raise ValueError(
"Unrecognized calculation type request {}"
"".format(self.reduced_method)
)
if (
self.reducer_target is REDUCER_TARGET.DARKS and self.nacq > 1
): # and self.nframes == 1:
nacq = self.getDARK_N(directory) or 1
data = data / nacq
if self.output_dtype is not None:
data = data.astype(self.output_dtype)
file_desc = fabio.edfimage.EdfImage(data=data, header=header)
                res[int(self.serievec[i])] = data
                file_desc.write(fileName)
_logger.info("end proccess darks and flat fields")
return res, infos
def preprocess(self):
# start setup function
if self.reduced_method is ReduceMethod.NONE:
return False
if self.reducer_target is REDUCER_TARGET.DARKS:
self.out_prefix = self._dark_output_prefix
self.info_nacq = "DARK_N"
else:
self.out_prefix = self._flat_output_prefix
self.info_nacq = "REF_N"
# init
self.nacq = 0
"""Number of acquisition runned"""
self.files = 0
"""Ref or dark files"""
self.nframes = 1
"""Number of frame per ref/dark file"""
self.serievec = ["0000"]
"""List of series discover"""
self.filesPerSerie = {}
"""Dict with key the serie id and values list of files to compute
for median or mean"""
self.infofile = ""
"""info file of the acquisition"""
# sample/prefix and info file
directory = self.scan.path
self.prefix = os.path.basename(directory)
extensionToTry = (".info", "0000.info")
for extension in extensionToTry:
infoFile = os.path.join(directory, self.prefix + extension)
if os.path.exists(infoFile):
self.infofile = infoFile
break
if self.infofile == "":
_logger.debug(f"fail to found .info file for {self.scan}")
"""
Set filelist
"""
# do the job only if not already done and overwrite not asked
        self.out_files = sorted(glob(os.path.join(directory, "*" + self._file_ext)))
self.filelist_fullname = self.get_originals()
        self.fileNameList = sorted(
            os.path.basename(_file) for _file in self.filelist_fullname
        )
self.nfiles = len(self.filelist_fullname)
# if nothing to process
if self.nfiles == 0:
_logger.info(
"no %s for %s, because no file to compute found"
% (self.reducer_target, directory)
)
return False
self.fid = fabio.open(self.filelist_fullname[0])
self.nframes = self.fid.nframes
self.nacq = 0
# get the info of number of acquisitions
if self.infofile != "":
self.nacq = self.get_info(self.info_nacq)
if self.nacq == 0:
self.nacq = self.nfiles
self.nseries = 1
if self.nacq > self.nfiles:
# get ready for accumulation and/or file multiimage?
self.nseries = self.nfiles
if (
self.nacq < self.nfiles
and self.get_n_digits(self.fileNameList[0], directory=directory) < 2
):
self.nFilePerSerie = self.nseries
self.serievec, self.filesPerSerie = self.preprocess_PCOTomo()
else:
self.series = self.fileNameList
self.serievec = self.get_series_value(self.fileNameList, self._file_ext)
self.filesPerSerie, self.nFilePerSerie = self.group_files_per_serie(
self.filelist_fullname, self.serievec
)
if self.filesPerSerie is not None:
for serie in self.filesPerSerie:
for _file in self.filesPerSerie[serie]:
if self.reducer_target is REDUCER_TARGET.DARKS:
self._raw_darks.append(os.path.join(self.scan.path, _file))
if self.reducer_target is REDUCER_TARGET.FLATS:
self._raw_flats.append(os.path.join(self.scan.path, _file))
return self.serievec is not None and self.filesPerSerie is not None
@staticmethod
def get_series_value(fileNames, file_ext):
assert len(fileNames) > 0
is_there_digits = len(re.findall(r"\d+", fileNames[0])) > 0
series = set()
i = 0
for fileName in fileNames:
if is_there_digits:
name = fileName.rstrip(file_ext)
file_index = name.split("_")[-1]
rm_not_numeric = re.compile(r"[^\d.]+")
file_index = rm_not_numeric.sub("", file_index)
series.add(file_index)
else:
series.add("%04d" % i)
i += 1
return list(series)
@staticmethod
def group_files_per_serie(files, series):
def findFileEndingWithSerie(poolFiles, serie):
res = []
for _file in poolFiles:
_f = _file.rstrip(".edf")
if _f.endswith(serie):
res.append(_file)
return res
def checkSeriesFilesLength(serieFiles):
length = -1
for serie in serieFiles:
if length == -1:
length = len(serieFiles[serie])
elif len(serieFiles[serie]) != length:
_logger.error("Series with inconsistant number of ref files")
assert len(series) > 0
if len(series) == 1:
return {series[0]: files}, len(files)
assert len(files) > 0
serieFiles = {}
unattributedFiles = files.copy()
for serie in series:
serieFiles[serie] = findFileEndingWithSerie(unattributedFiles, serie)
[unattributedFiles.remove(_f) for _f in serieFiles[serie]]
if len(unattributedFiles) > 0:
_logger.error("Failed to associate %s to any serie" % unattributedFiles)
return {}, 0
checkSeriesFilesLength(serieFiles)
return serieFiles, len(serieFiles[list(serieFiles.keys())[0]])
@staticmethod
def get_n_digits(_file, directory):
file_without_scanID = _file.replace(os.path.basename(directory), "", 1)
return len(re.findall(r"\d+", file_without_scanID))
def preprocess_PCOTomo(self):
filesPerSerie = {}
if self.nfiles % self.nacq == 0:
assert self.nacq < self.nfiles
self.nseries = self.nfiles // self.nacq
self.series = self.fileNameList
else:
_logger.warning("Fail to deduce series")
return None, None
linear = (
self.get_n_digits(self.fileNameList[0], directory=self.scan.scan_path) < 2
)
if linear is False:
# which digit pattern contains the file number?
lastone = True
penulti = True
for first_files in range(self.nseries - 1):
digivec_1 = re.findall(r"\d+", self.fileNameList[first_files])
digivec_2 = re.findall(r"\d+", self.fileNameList[first_files + 1])
if lastone:
lastone = (int(digivec_2[-1]) - int(digivec_1[-1])) == 0
if penulti:
penulti = (int(digivec_2[-2]) - int(digivec_1[-2])) == 0
linear = not penulti
if linear is False:
digivec_1 = re.findall(r"\d+", self.fileNameList[self.nseries - 1])
digivec_2 = re.findall(r"\d+", self.fileNameList[self.nseries])
            # confirm there is an increment of 1 after self.nseries in the penultimate digit pattern
if (int(digivec_2[-2]) - int(digivec_1[-2])) != 1:
linear = True
# series are simple sublists in main filelist
if linear is True:
is_there_digits = len(re.findall(r"\d+", self.fileNameList[0])) > 0
if is_there_digits:
serievec = set([re.findall(r"\d+", self.fileNameList[0])[-1]])
else:
serievec = set(["0000"])
for i in range(self.nseries):
if is_there_digits:
serie = re.findall(r"\d+", self.fileNameList[i * self.nacq])[-1]
serievec.add(serie)
filesPerSerie[serie] = self.fileNameList[
i * self.nacq : (i + 1) * self.nacq
]
else:
serievec.add("%04d" % i)
        # in the sorted filelist, the series index is incremented first, then the acquisition number:
else:
self.series = self.fileNameList[0 :: self.nseries]
serievec = set([re.findall(r"\d+", self.fileNameList[0])[-1]])
for serie in serievec:
filesPerSerie[serie] = self.fileNameList[0 :: self.nseries]
serievec = list(sorted(serievec))
if len(serievec) > 2:
_logger.error(
f"DarkRefs do not deal with multiple scan. (scan {self.scan})"
)
return None, None
assert len(serievec) <= 2
if len(serievec) > 1:
key = serievec[-1]
tomoN = self.getTomo_N(self.scan)
if tomoN is None:
_logger.error("Fail to found information %s. Can't find TOMO_N")
del serievec[-1]
serievec.append(str(tomoN).zfill(4))
filesPerSerie[serievec[-1]] = filesPerSerie[key]
del filesPerSerie[key]
assert len(serievec) == 2
assert len(filesPerSerie) == 2
return serievec, filesPerSerie
def get_originals(self) -> list:
"""compute the list of originals files to be used to compute the reducer target."""
if self.reducer_target is REDUCER_TARGET.FLATS:
try:
pattern = re.compile(self.input_flat_pattern)
except Exception:
pattern = None
_logger.error(
f"Fail to compute regular expresion for {self.input_flat_pattern}"
)
        elif self.reducer_target is REDUCER_TARGET.DARKS:
            try:
                pattern = re.compile(self.input_dark_pattern)
            except Exception:
                pattern = None
                _logger.error(
                    f"Failed to compile regular expression for {self.input_dark_pattern}"
                )
filelist_fullname = []
if pattern is None:
return filelist_fullname
directory = self.scan.path
for file in os.listdir(directory):
if pattern.match(file) and file.endswith(self._file_ext):
if (
file.startswith(self._flat_output_prefix)
or file.startswith(self._dark_output_prefix)
) is False:
filelist_fullname.append(os.path.join(directory, file))
return sorted(filelist_fullname)
def remove_raw_files(self):
"""Remove orignals files fitting the target (dark or flat files)"""
if self.reducer_target is REDUCER_TARGET.DARKS:
# In the case originals has already been found for the median
# calculation
if len(self._raw_darks) > 0:
files = self._raw_darks
else:
files = self.get_originals()
elif self.reducer_target is REDUCER_TARGET.FLATS:
if len(self._raw_flats) > 0:
files = self._raw_flats
else:
files = self.get_originals()
else:
_logger.error(
f"the requested what (reduce {self.reducer_target}) is not recognized. "
"Can't remove corresponding file"
)
return
_files = set(files)
for _file in _files:
try:
os.remove(_file)
except Exception as e:
_logger.error(e)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/scan/framereducer/hdf5framereducer.py 0000644 0236253 0006511 00000020555 00000000000 025441 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "04/01/2022"
from silx.io.url import DataUrl
from tomoscan.framereducerbase import (
REDUCER_TARGET,
FrameReducerBase,
ReduceMethod,
)
from tomoscan.esrf.scan.utils import get_compacted_dataslices
from tomoscan.scanbase import ReducedFramesInfos
from silx.io.utils import get_data
import numpy
import logging
_logger = logging.getLogger(__name__)
class HDF5FrameReducer(FrameReducerBase):
"""Frame reducer dedicated to HDF5"""
def get_series(self, scan, target: REDUCER_TARGET) -> list:
"""
        return a list of dictionaries. Dictionary keys are frame indexes in
        the acquisition; values are urls.
        :param HDF5TomoScan scan: scan containing frames to reduce
        :param REDUCER_TARGET target: darks or flats to be reduced
"""
target = REDUCER_TARGET.from_value(target)
if target is REDUCER_TARGET.DARKS:
raw_what = scan.darks
elif target is REDUCER_TARGET.FLATS:
raw_what = scan.flats
else:
raise ValueError(f"{target} is not handled")
if len(raw_what) == 0:
return []
else:
series = []
indexes = sorted(raw_what.keys())
            # a series is defined by contiguous indexes
current_serie = {indexes[0]: raw_what[indexes[0]]}
current_index = indexes[0]
for index in indexes[1:]:
if index == current_index + 1:
current_index = index
else:
series.append(current_serie)
current_serie = {}
current_index = index
current_serie[index] = raw_what[index]
if len(current_serie) > 0:
series.append(current_serie)
return series
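    # Illustrative grouping (hypothetical indexes): darks at indexes
    # {0, 1, 2000, 2001} are split into two series of contiguous indexes:
    # [{0: url_0, 1: url_1}, {2000: url_2000, 2001: url_2001}].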
def get_count_time_serie(self, indexes):
if self.scan.count_time is None:
return []
else:
return self.scan.count_time[indexes]
def get_machine_electric_current(self, indexes):
if self.scan.electric_current is None:
return []
else:
return self.scan.electric_current[indexes]
def load_data_serie(self, urls) -> dict:
"""load all urls. Trying to reduce load time by calling get_compacted_dataslices"""
        # handle cases where we only have to load one frame, when the method is FIRST or LAST
if self.reduced_method is ReduceMethod.FIRST and len(urls) > 0:
urls_keys = sorted(urls.keys())
urls = {
urls_keys[0]: urls[urls_keys[0]],
}
if self.reduced_method is ReduceMethod.LAST and len(urls) > 0:
urls_keys = sorted(urls.keys())
urls = {
urls_keys[-1]: urls[urls_keys[-1]],
}
# active loading
cpt_slices = get_compacted_dataslices(urls)
url_set = {}
for url in cpt_slices.values():
path = url.file_path(), url.data_path(), str(url.data_slice())
url_set[path] = url
n_elmts = 0
for url in url_set.values():
my_slice = url.data_slice()
n_elmts += my_slice.stop - my_slice.start
data = None
start_z = 0
for url in url_set.values():
my_slice = url.data_slice()
my_slice = slice(my_slice.start, my_slice.stop, 1)
new_url = DataUrl(
file_path=url.file_path(),
data_path=url.data_path(),
data_slice=my_slice,
scheme="silx",
)
loaded_data = get_data(new_url)
            # init data if dims are not known yet
if data is None:
data = numpy.empty(
shape=(
n_elmts,
self.scan.dim_2 or loaded_data.shape[-2],
self.scan.dim_1 or loaded_data.shape[-1],
)
)
if loaded_data.ndim == 2:
data[start_z, :, :] = loaded_data
start_z += 1
elif loaded_data.ndim == 3:
delta_z = my_slice.stop - my_slice.start
                data[start_z : start_z + delta_z, :, :] = loaded_data
start_z += delta_z
else:
raise ValueError("Dark and ref raw data should be 2D or 3D")
return data
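    # Illustrative: urls pointing to contiguous slices of one dataset (say
    # frames 0 and 1 of scan.h5::/entry/data/data) are compacted by
    # get_compacted_dataslices into a single DataUrl with
    # data_slice=slice(0, 2, 1), so both frames are read in a single access.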
def run(self) -> dict:
if self.reduced_method is ReduceMethod.MEDIAN:
method_ = numpy.median
elif self.reduced_method is ReduceMethod.MEAN:
method_ = numpy.mean
elif self.reduced_method is ReduceMethod.NONE:
return ({}, ReducedFramesInfos())
elif self.reduced_method in (ReduceMethod.FIRST, ReduceMethod.LAST):
method_ = "raw"
else:
raise ValueError(
f"Mode {self.reduced_method} for {self.reducer_target} is not managed"
)
raw_series = self.get_series(self.scan, self.reducer_target)
if len(raw_series) == 0:
_logger.warning(
f"No raw data found for {self.scan} in order to reduce {self.reducer_target}"
)
return ({}, ReducedFramesInfos())
res = {}
        # res: key is the series index (first frame index of the series), value is the numpy.array of the reduced frame
infos = ReducedFramesInfos()
for serie_ in raw_series:
serie_index = min(serie_)
if self.reducer_target is REDUCER_TARGET.DARKS and len(res) > 0:
continue
serie_frame_data = self.load_data_serie(serie_)
serie_count_time = self.get_count_time_serie(indexes=list(serie_.keys()))
serie_machine_electric_current = self.get_machine_electric_current(
indexes=list(serie_.keys())
)
if method_ == "raw":
                # if the method is raw then only the targeted frame (first or last) is loaded
data = res[serie_index] = serie_frame_data.reshape(
-1, serie_frame_data.shape[-1]
)
if self.reduced_method is ReduceMethod.FIRST:
index_infos = 0
elif self.reduced_method is ReduceMethod.LAST:
index_infos = -1
if len(serie_machine_electric_current) > 0:
infos.machine_electric_current.append(
serie_machine_electric_current[index_infos]
)
if len(serie_count_time) > 0:
infos.count_time.append(serie_count_time[index_infos])
else:
data = method_(serie_frame_data, axis=0)
if len(serie_machine_electric_current) > 0:
infos.machine_electric_current.append(
method_(serie_machine_electric_current)
)
if len(serie_count_time) > 0:
infos.count_time.append(method_(serie_count_time))
if self.output_dtype is not None:
data = data.astype(self.output_dtype)
res[serie_index] = data
return res, infos
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/scan/hdf5scan.py 0000644 0236253 0006511 00000164274 00000000000 021264 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
# Copyright (C) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""contains HDF5TomoScan, class to be used with HDF5 acquisition and associated classes, functions."""
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "09/08/2018"
from silx.utils.deprecation import deprecated
from tomoscan.esrf.scan.framereducer.hdf5framereducer import HDF5FrameReducer
from tomoscan.unitsystem import electriccurrentsystem, energysystem, timesystem
from tomoscan.scanbase import TomoScanBase, FOV
from tomoscan.scanbase import Source
import json
import io
import os
import h5py
import numpy
from silx.io.url import DataUrl
from silx.utils.enum import Enum as _Enum
from tomoscan.utils import BoundingBox1D, BoundingBox3D, docstring
from tomoscan.io import HDF5File
from tomoscan.identifier import ScanIdentifier
from tomoscan.esrf.identifier.hdf5Identifier import HDF5TomoScanIdentifier
from silx.io.utils import get_data
from tomoscan.unitsystem.unit import Unit
from tomoscan.unitsystem.metricsystem import MetricSystem
from tomoscan.nexus.paths.nxtomo import get_paths as _get_nexus_paths
from .utils import get_compacted_dataslices
from silx.io.utils import h5py_read_dataset
import typing
import logging
_logger = logging.getLogger(__name__)
class ImageKey(_Enum):
ALIGNMENT = -1
PROJECTION = 0
FLAT_FIELD = 1
DARK_FIELD = 2
INVALID = 3
@deprecated(
reason="moved", replacement="tomoscan.nexus.paths.nxtomo", since_version="0.8.0"
)
def get_nexus_paths(version: float):
return _get_nexus_paths(version=version)
class HDF5TomoScan(TomoScanBase):
"""
This is the implementation of a TomoBase class for an acquisition stored
in a HDF5 file.
    For now several properties of the acquisition are accessible through a
    getter (like get_scan_range) and a property (scan_range).
    This is done to be compliant with TomoBase instantiation. But this will
    progressively be replaced by properties at the 'TomoBase' level
:param scan: scan directory or scan masterfile.h5
:param Union[str, None] entry: name of the NXtomo entry to select. If given
index is ignored.
    :param Union[int, None] index: index of the NXtomo entry to select. Ignored if
an entry is specified. For consistency
entries are ordered alphabetically
:param Union[float, None] nx_version: Version of the Nexus convention to use.
By default (None) it will take the latest one
"""
_NEXUS_VERSION_PATH = "version"
_TYPE = "hdf5"
_DICT_ENTRY_KEY = "entry"
SCHEME = "silx"
REDUCED_DARKS_DATAURLS = (
DataUrl(
file_path="{scan_prefix}_darks.hdf5",
data_path="{entry}/darks/{index}",
scheme=SCHEME,
),
)
REDUCED_DARKS_METADATAURLS = (
DataUrl(
file_path="{scan_prefix}_darks.hdf5",
data_path="{entry}/darks/",
scheme=SCHEME,
),
)
REDUCED_FLATS_DATAURLS = (
DataUrl(
file_path="{scan_prefix}_flats.hdf5",
data_path="{entry}/flats/{index}",
scheme=SCHEME,
),
)
REDUCED_FLATS_METADATAURLS = (
DataUrl(
file_path="{scan_prefix}_flats.hdf5",
data_path="{entry}/flats/",
scheme=SCHEME,
),
)
FRAME_REDUCER_CLASS = HDF5FrameReducer
def __init__(
self,
scan: str,
entry: str = None,
index: typing.Optional[int] = 0,
ignore_projections: typing.Optional[typing.Iterable] = None,
nx_version=None,
):
if entry is not None:
index = None
# if the user give the master file instead of the scan dir...
if scan is not None:
if not os.path.exists(scan) and "." in os.path.split(scan)[-1]:
self.master_file = scan
scan = os.path.dirname(scan)
            elif os.path.isfile(scan):
self.master_file = scan
scan = os.path.dirname(scan)
else:
self.master_file = self.get_master_file(scan)
else:
self.master_file = None
super(HDF5TomoScan, self).__init__(
scan=scan, type_=HDF5TomoScan._TYPE, ignore_projections=ignore_projections
)
if scan is None:
self._entry = None
else:
self._entry = entry or self._get_entry_at(
index=index, file_path=self.master_file
)
if self._entry is None:
raise ValueError(
"unable to find a valid entry for %s" % self.master_file
)
# for now the default entry is 1_tomo but should change with time
self._name = None
self._sample_name = None
self._grp_size = None
# data caches
self._projections_compacted = None
self._flats = None
self._darks = None
self._tomo_n = None
# number of projections / radios
self._dark_n = None
# number of dark image made during acquisition
self._flat_n = None
# number of flat field made during acquisition
self._scan_range = None
# scan range, in degree
self._dim_1, self._dim_2 = None, None
# image dimensions
self._x_pixel_size = None
self._y_pixel_size = None
# pixel dimensions (tuple)
self._frames = None
self._image_keys = None
self._image_keys_control = None
self._rotation_angles = None
self._distance = None
self._fov = None
self._energy = None
self._estimated_cor_frm_motor = None
self._start_time = None
self._end_time = None
self._x_translations = None
self._y_translations = None
self._z_translations = None
self._nexus_paths = None
self._nexus_version = None
self._user_nx_version = nx_version
self._source_type = None
self._source_name = None
self._instrument_name = None
@staticmethod
def get_master_file(scan_path):
if os.path.isfile(scan_path):
master_file = scan_path
else:
master_file = os.path.join(scan_path, os.path.basename(scan_path))
if os.path.exists(master_file + ".nx"):
master_file = master_file + ".nx"
elif os.path.exists(master_file + ".hdf5"):
master_file = master_file + ".hdf5"
elif os.path.exists(master_file + ".h5"):
master_file = master_file + ".h5"
else:
master_file = master_file + ".nx"
return master_file
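    # Resolution order sketch (hypothetical path): for the scan directory
    # "/data/sample" the master file is looked up as "/data/sample/sample.nx",
    # then "/data/sample/sample.hdf5", then "/data/sample/sample.h5"; the
    # ".nx" name is returned as a fallback when none of them exists.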
@docstring(TomoScanBase.clear_caches)
def clear_caches(self) -> None:
self._dim_1, self._dim_2 = None, None
self._x_pixel_size = None
self._y_pixel_size = None
self._x_magnified_pixel_size = None
self._y_magnified_pixel_size = None
self._distance = None
self._fov = None
self._source = None
self._source_type = None
self._source_name = None
self._instrument_name = None
self._energy = None
self._x_flipped = None
self._y_flipped = None
super().clear_caches()
def clear_frames_caches(self):
self._projections_compacted = None
self._flats = None
self._darks = None
self._tomo_n = None
self._dark_n = None
self._flat_n = None
self._scan_range = None
self._frames = None
self._image_keys = None
self._image_keys_control = None
self._count_time = None
self._x_flipped = None
self._y_flipped = None
self._x_translations = None
self._y_translations = None
self._z_translations = None
self._rotation_angles = None
super().clear_frames_caches()
@staticmethod
def _get_entry_at(index: int, file_path: str) -> str:
"""
:param index:
:param file_path:
:return:
"""
entries = HDF5TomoScan.get_valid_entries(file_path)
if len(entries) == 0:
return None
else:
return entries[index]
@staticmethod
def get_valid_entries(file_path: str) -> tuple:
"""
return the list of 'Nxtomo' entries at the root level
:param str file_path:
:return: list of valid Nxtomo node (ordered alphabetically)
:rtype: tuple
        ..note: entries are sorted to ensure consistency
"""
if not os.path.isfile(file_path):
raise ValueError("given file path should be a file")
def browse_group(group):
res_buf = []
for entry_alias in group.keys():
entry = group.get(entry_alias)
if isinstance(entry, h5py.Group):
if HDF5TomoScan.node_is_nxtomo(entry):
res_buf.append(entry.name)
else:
res_buf.extend(browse_group(entry))
return res_buf
with HDF5File(file_path, "r") as h5f:
res = browse_group(h5f)
res.sort()
return tuple(res)
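    # Usage sketch (hypothetical file): for a "scan.nx" file holding two
    # NXtomo groups, HDF5TomoScan.get_valid_entries("scan.nx") returns the
    # alphabetically sorted tuple ("/entry0000", "/entry0001").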
@staticmethod
def node_is_nxtomo(node: h5py.Group) -> bool:
"""check if the given h5py node is an nxtomo node or not"""
if "NX_class" in node.attrs or "NXclass" in node.attrs:
_logger.info(node.name + " is recognized as an nx class.")
else:
_logger.info(node.name + " is node an nx class.")
return False
if "definition" in node.attrs and node.attrs["definition"].lower() == "nxtomo":
_logger.info(node.name + " is recognized as an NXtomo class.")
return True
elif (
"instrument" in node
and "NX_class" in node["instrument"].attrs
and node["instrument"].attrs["NX_class"]
in (
"NXinstrument",
b"NXinstrument",
            )  # b"NXinstrument" is needed for Diamond compatibility
):
return "detector" in node["instrument"]
else:
return False
@docstring(TomoScanBase.is_tomoscan_dir)
@staticmethod
def is_tomoscan_dir(directory: str, **kwargs) -> bool:
if os.path.isfile(directory):
master_file = directory
else:
master_file = HDF5TomoScan.get_master_file(scan_path=directory)
if master_file:
entries = HDF5TomoScan.get_valid_entries(file_path=master_file)
return len(entries) > 0
@docstring(TomoScanBase.is_abort)
def is_abort(self, **kwargs):
# for now there is no abort definition in .hdf5
return False
@docstring(TomoScanBase.to_dict)
def to_dict(self) -> dict:
res = super().to_dict()
res[self.DICT_PATH_KEY] = self.master_file
res[self._DICT_ENTRY_KEY] = self.entry
return res
@staticmethod
def from_dict(_dict: dict):
scan = HDF5TomoScan(scan=None)
scan.load_from_dict(_dict=_dict)
return scan
@docstring(TomoScanBase.load_from_dict)
def load_from_dict(self, _dict: dict) -> TomoScanBase:
"""
:param _dict:
:return:
"""
if isinstance(_dict, io.TextIOWrapper):
data = json.load(_dict)
else:
data = _dict
if not (self.DICT_TYPE_KEY in data and data[self.DICT_TYPE_KEY] == self._TYPE):
raise ValueError("Description is not an HDF5Scan json description")
if HDF5TomoScan._DICT_ENTRY_KEY not in data:
raise ValueError("No hdf5 entry specified")
assert self.DICT_PATH_KEY in data
self._entry = data[self._DICT_ENTRY_KEY]
self.master_file = self.get_master_file(data[self.DICT_PATH_KEY])
if os.path.isdir(data[self.DICT_PATH_KEY]):
self.path = data[self.DICT_PATH_KEY]
else:
self.path = os.path.dirname(data[self.DICT_PATH_KEY])
return self
@property
def entry(self) -> str:
return self._entry
@property
def nexus_version(self):
if self._user_nx_version is not None:
return self._user_nx_version
return self._get_generic_key(
"_nexus_version", self._NEXUS_VERSION_PATH, is_attribute=True
)
@nexus_version.setter
def nexus_version(self, version):
if not isinstance(version, float):
raise TypeError("version expect to be a float")
self._nexus_version = version
@property
def nexus_path(self):
if self._nexus_paths is None:
self._nexus_paths = _get_nexus_paths(self.nexus_version)
return self._nexus_paths
@property
@docstring(TomoScanBase.source)
def source(self):
if self._source is None:
self._source = Source(
name=self.source_name,
type=self.source_type,
)
return self._source
@property
def source_name(self):
return self._get_generic_key("_source_name", self.nexus_path.SOURCE_NAME)
@property
def source_type(self):
return self._get_generic_key("_source_type", self.nexus_path.SOURCE_TYPE)
@property
@docstring(TomoScanBase.instrument_name)
def instrument_name(self) -> typing.Optional[str]:
"""
:return: instrument name
"""
return self._get_generic_key(
"_instrument_name", self.nexus_path.INSTRUMENT_NAME
)
@property
def sequence_name(self):
"""Return the sequence name"""
return self._get_generic_key("_name", self.nexus_path.NAME_PATH)
@property
@docstring(TomoScanBase.projections)
def sample_name(self):
return self._get_generic_key("_sample_name", self.nexus_path.SAMPLE_NAME_PATH)
@property
@docstring(TomoScanBase.projections)
def group_size(self):
return self._get_generic_key("_grp_size", self.nexus_path.GRP_SIZE_ATTR)
@property
@docstring(TomoScanBase.projections)
def projections(self) -> typing.Optional[dict]:
if self._projections is None:
if self.frames:
ignored_projs = []
if self.ignore_projections is not None:
ignored_projs = self.ignore_projections
proj_frames = tuple(
filter(
lambda x: (
x.image_key is ImageKey.PROJECTION
and x.index not in ignored_projs
and x.is_control is False
),
self.frames,
)
)
self._projections = {}
for proj_frame in proj_frames:
self._projections[proj_frame.index] = proj_frame.url
return self._projections
@projections.setter
def projections(self, projections: dict):
self._projections = projections
def get_projections_intensity_monitor(self) -> dict:
"""return intensity monitor values for projections"""
if self.frames:
ignored_projs = []
if self.ignore_projections is not None:
ignored_projs = self.ignore_projections
proj_frames = tuple(
filter(
lambda x: (
x.image_key is ImageKey.PROJECTION
and x.index not in ignored_projs
and x.is_control is False
),
self.frames,
)
)
intensity_monitor = {}
for proj_frame in proj_frames:
intensity_monitor[proj_frame.index] = proj_frame.intensity_monitor
return intensity_monitor
else:
return {}
@property
@docstring(TomoScanBase.alignment_projections)
def alignment_projections(self) -> typing.Optional[dict]:
if self._alignment_projections is None:
if self.frames:
proj_frames = tuple(
filter(
lambda x: x.image_key == ImageKey.PROJECTION
and x.is_control is True,
self.frames,
)
)
self._alignment_projections = {}
for proj_frame in proj_frames:
self._alignment_projections[proj_frame.index] = proj_frame.url
return self._alignment_projections
@property
@docstring(TomoScanBase.darks)
def darks(self) -> typing.Optional[dict]:
if self._darks is None:
if self.frames:
dark_frames = tuple(
filter(lambda x: x.image_key is ImageKey.DARK_FIELD, self.frames)
)
self._darks = {}
for dark_frame in dark_frames:
self._darks[dark_frame.index] = dark_frame.url
return self._darks
@property
@docstring(TomoScanBase.flats)
def flats(self) -> typing.Optional[dict]:
if self._flats is None:
if self.frames:
flat_frames = tuple(
filter(lambda x: x.image_key is ImageKey.FLAT_FIELD, self.frames)
)
self._flats = {}
for flat_frame in flat_frames:
self._flats[flat_frame.index] = flat_frame.url
return self._flats
@docstring(TomoScanBase.update)
def update(self) -> None:
"""update list of radio and reconstruction by parsing the scan folder"""
if self.master_file is None or not os.path.exists(self.master_file):
return
self.projections = self._get_projections_url()
# TODO: update darks and flats too
@docstring(TomoScanBase.get_proj_angle_url)
def _get_projections_url(self):
if self.master_file is None or not os.path.exists(self.master_file):
return
frames = self.frames
if frames is not None:
urls = {}
for frame in frames:
if frame.image_key is ImageKey.PROJECTION:
urls[frame.index] = frame.url
return urls
else:
return None
@docstring(TomoScanBase.tomo_n)
@property
def tomo_n(self) -> typing.Optional[int]:
"""we are making two asumptions for computing tomo_n:
- if a rotation = scan_range +/- EPSILON this is a return projection
- The delta between each projections is constant
"""
return self._get_generic_key("_tomo_n", self.nexus_path.TOMO_N_SCAN)
@docstring(TomoScanBase.tomo_n)
@property
def magnification(self):
return self._get_generic_key(
"_magnification",
"/".join(
[
self.nexus_path.INSTRUMENT_PATH,
self.nexus_path.nx_instrument_paths.DETECTOR_PATH,
self.nexus_path.nx_detector_paths.MAGNIFICATION,
]
),
)
@property
def return_projs(self) -> typing.Optional[list]:
""" """
frames = self.frames
if frames:
return_frames = list(filter(lambda x: x.is_control is True, frames))
return return_frames
else:
return None
@property
def rotation_angle(self) -> typing.Optional[tuple]:
cast_to_float = lambda values: [float(val) for val in values]
return self._get_generic_key(
"_rotation_angles",
self.nexus_path.ROTATION_ANGLE_PATH,
apply_function=cast_to_float,
)
@property
def x_translation(self) -> typing.Optional[tuple]:
cast_to_float = lambda values: [float(val) for val in values]
return self._get_generic_key(
"_x_translations",
self.nexus_path.X_TRANS_PATH,
apply_function=cast_to_float,
unit=MetricSystem.METER,
)
@property
def y_translation(self) -> typing.Optional[tuple]:
cast_to_float = lambda values: [float(val) for val in values]
return self._get_generic_key(
"_y_translations",
self.nexus_path.Y_TRANS_PATH,
apply_function=cast_to_float,
unit=MetricSystem.METER,
)
@property
def z_translation(self) -> typing.Optional[tuple]:
cast_to_float = lambda values: [float(val) for val in values]
return self._get_generic_key(
"_z_translations",
self.nexus_path.Z_TRANS_PATH,
apply_function=cast_to_float,
unit=MetricSystem.METER,
)
@property
def image_key(self) -> typing.Optional[list]:
return self._get_generic_key("_image_keys", self.nexus_path.IMG_KEY_PATH)
@property
def image_key_control(self) -> typing.Optional[list]:
return self._get_generic_key(
"_image_keys_control", self.nexus_path.IMG_KEY_CONTROL_PATH
)
@property
def count_time(self) -> typing.Optional[list]:
return self._get_generic_key(
"_count_time",
self.nexus_path.EXPOSURE_TIME_PATH,
unit=timesystem.TimeSystem.SECOND,
)
@property
@deprecated(replacement="count_time", since_version="1.0.0")
def exposure_time(self) -> typing.Optional[list]:
return self.count_time
@property
def electric_current(self) -> dict:
return self._get_generic_key(
"_electric_current",
self.nexus_path.ELECTRIC_CURRENT_PATH,
unit=electriccurrentsystem.ElectricCurrentSystem.AMPERE,
)
@property
def x_flipped(self) -> bool:
return self._get_generic_key(
"_x_flipped",
"/".join(
[
self.nexus_path.INSTRUMENT_PATH,
self.nexus_path.nx_instrument_paths.DETECTOR_PATH,
self.nexus_path.nx_detector_paths.X_FLIPPED,
]
),
)
@property
def y_flipped(self) -> bool:
return self._get_generic_key(
"_y_flipped",
"/".join(
[
self.nexus_path.INSTRUMENT_PATH,
self.nexus_path.nx_instrument_paths.DETECTOR_PATH,
self.nexus_path.nx_detector_paths.Y_FLIPPED,
]
),
)
@docstring(TomoScanBase)
def get_bounding_box(self, axis: typing.Union[str, int] = None) -> tuple:
"""
Return the bounding box covered by the scan (only take into account the projections).
axis is expected to be in (0, 1, 2) or (x==0, y==1, z==2)
:note: current pixel size is given with magnification. To move back to sample space (x_translation, y_translation, z_translation)
we need to `unmagnified` this is size
"""
if axis is None:
x_bb = self.get_bounding_box(axis="x")
y_bb = self.get_bounding_box(axis="y")
z_bb = self.get_bounding_box(axis="z")
return BoundingBox3D(
(z_bb.min, y_bb.min, x_bb.min),
(z_bb.max, y_bb.max, x_bb.max),
)
if axis == 0:
axis = "z"
elif axis == 1:
axis = "y"
elif axis == 2:
axis = "x"
if axis not in ("x", "y", "z"):
raise ValueError(
f"Axis is expected to be in ('x', 'y', 'z', 0, 1, 2). Got {axis}."
)
if axis == "x":
translations = self.x_translation
default_pixel_size = self.x_pixel_size
n_pixel = self.dim_1
elif axis == "y":
translations = self.y_translation
default_pixel_size = self.y_pixel_size
n_pixel = self.dim_2
elif axis == "z":
translations = self.z_translation
default_pixel_size = self.y_pixel_size
n_pixel = self.dim_2
else:
raise ValueError(
f"Axis is expected to be in ('x', 'y', 'z', 0, 1, 2). Got {axis}."
)
if translations is None or len(translations) == 0:
raise ValueError(f"Unable to find translation for axis {axis}")
translations = numpy.asarray(translations)
# TODO: might need to filter only the projection one ?
filtered_translations_for_proj = translations[
self.image_key_control == ImageKey.PROJECTION.value
]
min_axis_translation = filtered_translations_for_proj.min()
max_axis_translation = filtered_translations_for_proj.max()
if default_pixel_size is None:
raise ValueError(f"Unable to find pixel size for axis {axis}")
if n_pixel is None:
raise ValueError(f"Unable to find number of pixel for axis {axis}")
min_pos_in_meter = min_axis_translation - (n_pixel / 2.0 * default_pixel_size)
max_pos_in_meter = max_axis_translation + (n_pixel / 2.0 * default_pixel_size)
return BoundingBox1D(min_pos_in_meter, max_pos_in_meter)
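# Illustrative usage sketch (file name and entry are hypothetical):
#
#     scan = HDF5TomoScan(scan="scan.h5", entry="entry")
#     x_bb = scan.get_bounding_box(axis="x")  # BoundingBox1D, in meters
#     bb = scan.get_bounding_box()            # BoundingBox3D over (z, y, x)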
def _get_generic_key(
self,
key_name,
path_key_name,
unit: typing.Optional[Unit] = None,
apply_function=None,
is_attribute=False,
) -> typing.Any:
if not isinstance(unit, (type(None), Unit)):
raise TypeError(
f"default_unit must be an instance of {Unit} or None. Not {type(unit)}"
)
if getattr(self, key_name, None) is None:
self._check_hdf5scan_validity()
with HDF5File(self.master_file, "r") as h5_file:
if is_attribute and path_key_name in h5_file[self._entry].attrs:
attr_val = h5py_read_dataset(
h5_file[self._entry].attrs[path_key_name]
)
if apply_function is not None:
attr_val = apply_function(attr_val)
elif not is_attribute and path_key_name in h5_file[self._entry]:
if unit is not None:
attr_val = self._get_value(
h5_file[self._entry][path_key_name], default_unit=unit
)
else:
attr_val = h5py_read_dataset(
h5_file[self._entry][path_key_name]
)
if apply_function is not None:
attr_val = apply_function(attr_val)
else:
attr_val = None
setattr(self, key_name, attr_val)
return getattr(self, key_name)
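# Sketch of the lazy-loading pattern implemented above: metadata properties
# delegate to _get_generic_key, which reads the HDF5 dataset once, caches it
# on an instance attribute and returns the cached value afterwards, e.g.
# (attribute and path below are hypothetical):
#
#     value = self._get_generic_key(
#         "_my_value",                     # instance cache attribute
#         "instrument/detector/my_value",  # path relative to the entry
#         unit=MetricSystem.METER,         # optional conversion to SI
#     )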
@docstring(TomoScanBase.dark_n)
@property
def dark_n(self) -> typing.Optional[int]:
if self.darks is not None:
return len(self.darks)
else:
return None
@docstring(TomoScanBase.flat_n)
@property
def flat_n(self) -> typing.Optional[int]:
if self.flats is not None:
return len(self.flats)
else:
return None
@docstring(TomoScanBase.ff_interval)
@property
def ff_interval(self):
raise NotImplementedError(
"not implemented for hdf5. But we have an acquisition sequence instead."
)
@docstring(TomoScanBase.scan_range)
@property
def scan_range(self) -> typing.Optional[int]:
"""For now scan range should return 180 or 360. We don't expect other value."""
if (
self._scan_range is None
and self.master_file
and os.path.exists(self.master_file)
and self._entry is not None
):
rotation_angle = self.rotation_angle
if rotation_angle is not None:
angle_range = numpy.max(rotation_angle) - numpy.min(rotation_angle)
dist_to180 = abs(180 - angle_range)
dist_to360 = abs(360 - angle_range)
if dist_to180 < dist_to360:
self._scan_range = 180
else:
self._scan_range = 360
return self._scan_range
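# Example: for rotation angles spanning 0 to 182 degrees, angle_range is 182,
# which is closer to 180 than to 360, so scan_range resolves to 180.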
@property
def dim_1(self) -> typing.Optional[int]:
if self._dim_1 is None:
self._get_dim1_dim2()
return self._dim_1
@property
def dim_2(self) -> typing.Optional[int]:
if self._dim_2 is None:
self._get_dim1_dim2()
return self._dim_2
@property
def pixel_size(self) -> typing.Optional[float]:
"""return x pixel size in meter"""
return self.x_pixel_size
@property
def x_pixel_size(self) -> typing.Optional[float]:
"""return x pixel size in meter"""
return self._get_generic_key(
"_x_pixel_size",
self.nexus_path.X_PIXEL_SIZE_PATH,
unit=MetricSystem.METER,
)
@property
def y_pixel_size(self) -> typing.Optional[float]:
"""return y pixel size in meter"""
return self._get_generic_key(
"_y_pixel_size",
self.nexus_path.Y_PIXEL_SIZE_PATH,
unit=MetricSystem.METER,
)
@property
def x_real_pixel_size(self) -> typing.Optional[float]:
return self._get_generic_key(
"_x_real_pixel_size",
self.nexus_path.X_REAL_PIXEL_SIZE_PATH,
unit=MetricSystem.METER,
)
@property
def y_real_pixel_size(self) -> typing.Optional[float]:
return self._get_generic_key(
"_y_real_pixel_size",
self.nexus_path.Y_REAL_PIXEL_SIZE_PATH,
unit=MetricSystem.METER,
)
def _get_fov(self):
with HDF5File(self.master_file, "r", swmr=True, libver="latest") as h5_file:
if self.nexus_path.FOV_PATH in h5_file[self._entry]:
fov = h5py_read_dataset(h5_file[self._entry][self.nexus_path.FOV_PATH])
return FOV.from_value(fov)
else:
return None
def _get_dim1_dim2(self):
if self.master_file and os.path.exists(self.master_file):
if self.projections is not None:
if len(self.projections) > 0:
url = list(self.projections.values())[0]
try:
with HDF5File(url.file_path(), mode="r") as h5s:
self._dim_2, self._dim_1 = h5s[url.data_path()].shape[-2:]
except Exception:
self._dim_2, self._dim_1 = get_data(
list(self.projections.values())[0]
).shape
@property
def distance(self) -> typing.Optional[float]:
"""return sample detector distance in meter"""
return self._get_generic_key(
"_distance",
self.nexus_path.DISTANCE_PATH,
unit=MetricSystem.METER,
)
@property
@docstring(TomoScanBase.field_of_view)
def field_of_view(self):
if self._fov is None and self.master_file and os.path.exists(self.master_file):
self._fov = self._get_fov()
return self._fov
@property
@docstring(TomoScanBase.estimated_cor_frm_motor)
def estimated_cor_frm_motor(self):
cast_to_float = lambda x: float(x)
return self._get_generic_key(
"_estimated_cor_frm_motor",
self.nexus_path.ESTIMATED_COR_FRM_MOTOR_PATH,
apply_function=cast_to_float,
)
@property
def energy(self) -> typing.Optional[float]:
"""energy in keV"""
energy_si = self._get_generic_key(
"_energy",
self.nexus_path.ENERGY_PATH,
unit=energysystem.EnergySI.KILOELECTRONVOLT,
)
if energy_si is None:
return None
else:
# for energy we make an exception: values are expressed in keV instead of SI
energy_kev = energy_si / energysystem.EnergySI.KILOELECTRONVOLT.value
return energy_kev
@property
def start_time(self):
return self._get_generic_key("_start_time", self.nexus_path.START_TIME_PATH)
@property
def end_time(self):
return self._get_generic_key("_end_time", self.nexus_path.END_TIME_PATH)
@property
def intensity_monitor(self):
return self._get_generic_key(
"_intensity_monitor", self.nexus_path.INTENSITY_MONITOR_PATH
)
@staticmethod
def _is_return_frame(
img_key, lframe, llast_proj_frame, ldelta_angle, return_already_reach
) -> tuple:
"""return is_return, delta_angle"""
if ImageKey.from_value(img_key) is not ImageKey.PROJECTION:
return False, None
if ldelta_angle is None and llast_proj_frame is not None:
delta_angle = lframe.rotation_angle - llast_proj_frame.rotation_angle
return False, delta_angle
elif return_already_reach:
return True, ldelta_angle
else:
current_angle = lframe.rotation_angle - llast_proj_frame.rotation_angle
return abs(current_angle) <= 2 * ldelta_angle, ldelta_angle
@property
def frames(self) -> typing.Optional[tuple]:
"""return tuple of frames. Frames contains"""
if self._frames is None:
image_keys = self.image_key
rotation_angles = self.rotation_angle
x_translation = self.x_translation
if x_translation is None and image_keys is not None:
x_translation = [None] * len(image_keys)
y_translation = self.y_translation
if y_translation is None and image_keys is not None:
y_translation = [None] * len(image_keys)
z_translation = self.z_translation
if z_translation is None and image_keys is not None:
z_translation = [None] * len(image_keys)
intensity_monitor = self.intensity_monitor
if intensity_monitor is None and image_keys is not None:
intensity_monitor = [None] * len(image_keys)
if image_keys is not None and len(image_keys) != len(rotation_angles):
raise ValueError(
"`rotation_angle` and `image_key` have "
"incoherent size (%s vs %s). Unable to "
"deduce frame properties" % (len(rotation_angles), len(image_keys))
)
self._frames = []
delta_angle = None
last_proj_frame = None
return_already_reach = False
if image_keys is None:
# in the case there is no frame / image keys registered at all
return self._frames
for i_frame, rot_a, img_key, x_tr, y_tr, z_tr, i_m in zip(
range(len(rotation_angles)),
rotation_angles,
image_keys,
x_translation,
y_translation,
z_translation,
intensity_monitor,
):
url = DataUrl(
file_path=self.master_file,
data_slice=(i_frame),
data_path=self.get_detector_data_path(),
scheme="silx",
)
frame = TomoFrame(
index=i_frame,
url=url,
image_key=img_key,
rotation_angle=rot_a,
x_translation=x_tr,
y_translation=y_tr,
z_translation=z_tr,
intensity_monitor=i_m,
)
if self.image_key_control is not None:
try:
is_control_frame = (
ImageKey.from_value(
int(self.image_key_control[frame.index])
)
is ImageKey.ALIGNMENT
)
except Exception:
_logger.warning(
f"Unable to deduce if {frame.index} is a control frame. Consider it is not"
)
is_control_frame = False
else:
return_already_reach, delta_angle = self._is_return_frame(
img_key=img_key,
lframe=frame,
llast_proj_frame=last_proj_frame,
ldelta_angle=delta_angle,
return_already_reach=return_already_reach,
)
is_control_frame = return_already_reach
frame.is_control = is_control_frame
self._frames.append(frame)
last_proj_frame = frame
self._frames = tuple(self._frames)
return self._frames
@docstring(TomoScanBase.get_proj_angle_url)
def get_proj_angle_url(self) -> typing.Optional[dict]:
if self.frames is not None:
res = {}
for frame in self.frames:
if frame.image_key is ImageKey.PROJECTION:
if frame.is_control is False:
res[frame.rotation_angle] = frame.url
else:
res[str(frame.rotation_angle) + "(1)"] = frame.url
return res
else:
return None
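# Illustrative example of the returned mapping: rotation angles are used as
# keys, with "(1)" appended for return/alignment projections, e.g.
#     {0.0: DataUrl(...), 90.0: DataUrl(...), "180.0(1)": DataUrl(...)}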
def _get_sinogram_ref_imp(self, line, subsampling=1):
"""call the reference implementation of get_sinogram.
Used for unit test and insure the result is the same as get_sinogram
"""
return TomoScanBase.get_sinogram(self, line=line, subsampling=subsampling)
@docstring(TomoScanBase)
def get_sinogram(
self,
line,
subsampling=1,
norm_method: typing.Optional[str] = None,
**kwargs,
) -> numpy.ndarray:
if (
self.projections is not None
and self.dim_2 is not None
and line > self.dim_2
) or line < 0:
raise ValueError("requested line {} is not in the scan".format(line))
if not isinstance(subsampling, int):
raise TypeError("subsampling expected to be an int")
if subsampling <= 0:
raise ValueError("subsampling expected to be higher than 1")
if self.projections is not None:
# get the z line
with HDF5File(self.master_file, mode="r") as h5f:
raw_sinogram = h5f[self.get_detector_data_path()][:, line, :]
assert raw_sinogram.ndim == 2
ignored_projs = []
if self.ignore_projections is not None:
ignored_projs = self.ignore_projections
def is_pure_projection(frame: TomoFrame):
return (
frame.image_key is ImageKey.PROJECTION
and not frame.is_control
and frame.index not in ignored_projs
)
is_projection_array = numpy.array(
[is_pure_projection(frame) for frame in self.frames]
)
# TODO: simplify & reduce with filter or map ?
proj_indexes = []
for x, y in zip(self.frames, is_projection_array):
if bool(y) is True:
proj_indexes.append(x.index)
raw_sinogram = raw_sinogram[is_projection_array, :]
assert len(raw_sinogram) == len(
proj_indexes
), "expect one projection index per sinogram row"
assert raw_sinogram.ndim == 2, "sinogram is expected to be 2D"
# now apply flat field correction on each line
res = []
for z_frame_raw_sino, proj_index in zip(raw_sinogram, proj_indexes):
assert z_frame_raw_sino.ndim == 1
line_corrected = self.flat_field_correction(
projs=(z_frame_raw_sino,),
proj_indexes=[
proj_index,
],
line=line,
)[0]
assert isinstance(line_corrected, numpy.ndarray)
assert line_corrected.ndim == 1
res.append(line_corrected)
sinogram = numpy.array(res)
assert sinogram.ndim == 2
# apply subsampling (could be sped up, but probably not worth the added
# complexity)
return self._apply_sino_norm(
sinogram[::subsampling].copy(),
line=line,
norm_method=norm_method,
**kwargs,
)
else:
return None
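# Illustrative usage sketch (file name and entry are hypothetical): extract
# the flat-field corrected sinogram of detector row 12, keeping one
# projection out of two.
#
#     scan = HDF5TomoScan(scan="scan.h5", entry="entry")
#     sino = scan.get_sinogram(line=12, subsampling=2)
#     # sino is a 2D numpy array: (n kept projections, dim_1)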
def get_detector_data_path(self) -> str:
return self.entry + "/instrument/detector/data"
@property
def projections_compacted(self):
"""
Return a compacted view of projection frames.
:return: Dictionary where the key is a list of indices, and the value
is the corresponding `silx.io.url.DataUrl` with merged data_slice
:rtype: dict
"""
if self._projections_compacted is None:
self._projections_compacted = get_compacted_dataslices(self.projections)
return self._projections_compacted
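# Illustrative example: if projections 0..99 all point to contiguous slices
# of the same 3D dataset, the compacted view maps those indices to urls
# sharing a single merged data_slice, so the data can be read in one access
# instead of one hundred.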
def __str__(self):
return "hdf5 scan(master_file: %s, entry: %s)" % (
os.sep.join(os.path.abspath(self.master_file).split(os.sep)[-3:]),
self.entry,
)
@staticmethod
def _get_value(node: h5py.Group, default_unit: Unit):
"""convert the value contained in the node to the adapted unit.
Unit can be defined in on of the group attributes. It it is the case
will pick this unit, otherwise will use the default unit
"""
if not isinstance(default_unit, Unit):
raise TypeError(
f"default_unit must be an instance of {Unit}. Not {type(default_unit)}"
)
value = h5py_read_dataset(node)
if "unit" in node.attrs:
unit = node.attrs["unit"]
elif "units" in node.attrs:
unit = node.attrs["units"]
else:
unit = default_unit
# handle Diamond dataset where unit is stored as bytes
if hasattr(unit, "decode"):
unit = unit.decode()
return value * default_unit.from_value(unit).value
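# Example (assuming MetricSystem maps "mm" to 1e-3 meter): for a dataset
# holding 2.0 with a "unit" attribute of "mm" and default_unit set to
# MetricSystem.METER, the returned value is 2.0 * 1e-3 = 0.002.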
def _check_hdf5scan_validity(self):
if self.master_file is None:
raise ValueError("No master file provided")
if self.entry is None:
raise ValueError("No entry provided")
with HDF5File(self.master_file, "r") as h5_file:
if self._entry not in h5_file:
raise ValueError(
"Given entry %s is not in the master "
"file %s" % (self._entry, self.master_file)
)
def get_flat_expected_location(self):
return DataUrl(
file_path=self.master_file,
data_path=_get_nexus_paths(self.nexus_version).PROJ_PATH,
).path()
def get_dark_expected_location(self):
return DataUrl(
file_path=self.master_file,
data_path=_get_nexus_paths(self.nexus_version).PROJ_PATH,
).path()
def get_projection_expected_location(self):
return DataUrl(
file_path=self.master_file,
data_path=_get_nexus_paths(self.nexus_version).PROJ_PATH,
).path()
def get_energy_expected_location(self):
return DataUrl(
file_path=self.master_file,
data_path=_get_nexus_paths(self.nexus_version).ENERGY_PATH,
).path()
def get_distance_expected_location(self):
return DataUrl(
file_path=self.master_file,
data_path=_get_nexus_paths(self.nexus_version).DISTANCE_PATH,
).path()
def get_pixel_size_expected_location(self):
return DataUrl(
file_path=self.master_file,
data_path=_get_nexus_paths(self.nexus_version).X_PIXEL_SIZE_PATH,
).path()
@docstring(TomoScanBase.get_relative_file)
def get_relative_file(
self, file_name: str, with_dataset_prefix=True
) -> typing.Optional[str]:
if self.path is not None:
if with_dataset_prefix:
basename = self.get_dataset_basename()
basename = "_".join((basename, file_name))
return os.path.join(self.path, basename)
else:
return os.path.join(self.path, file_name)
else:
return None
def get_dataset_basename(self) -> str:
basename, _ = os.path.splitext(self.master_file)
return os.path.basename(basename)
@docstring(TomoScanBase)
def save_reduced_darks(
self,
darks: dict,
output_urls: tuple = REDUCED_DARKS_DATAURLS,
darks_infos=None,
metadata_output_urls=REDUCED_DARKS_METADATAURLS,
):
"""
Dump computed dark (median / mean...) into files
"""
super().save_reduced_darks(
darks=darks,
output_urls=output_urls,
darks_infos=darks_infos,
metadata_output_urls=metadata_output_urls,
)
@docstring(TomoScanBase)
def load_reduced_darks(
self,
inputs_urls: tuple = REDUCED_DARKS_DATAURLS,
metadata_input_urls=REDUCED_DARKS_METADATAURLS,
return_as_url: bool = False,
return_info: bool = False,
) -> dict:
"""
load computed dark (median / mean...) into files
"""
return super().load_reduced_darks(
inputs_urls=inputs_urls,
metadata_input_urls=metadata_input_urls,
return_as_url=return_as_url,
return_info=return_info,
)
@docstring(TomoScanBase)
def save_reduced_flats(
self,
flats: dict,
output_urls: tuple = REDUCED_FLATS_DATAURLS,
flats_infos=None,
metadata_output_urls: tuple = REDUCED_FLATS_METADATAURLS,
) -> dict:
"""
Dump computed flats (median / mean...) into files
"""
super().save_reduced_flats(
flats=flats,
metadata_output_urls=metadata_output_urls,
output_urls=output_urls,
flats_infos=flats_infos,
)
@docstring(TomoScanBase)
def load_reduced_flats(
self,
inputs_urls: tuple = REDUCED_FLATS_DATAURLS,
metadata_input_urls=REDUCED_FLATS_METADATAURLS,
return_as_url: bool = False,
return_info: bool = False,
) -> dict:
"""
load computed dark (median / mean...) into files
"""
return super().load_reduced_flats(
inputs_urls=inputs_urls,
metadata_input_urls=metadata_input_urls,
return_as_url=return_as_url,
return_info=return_info,
)
@docstring(TomoScanBase.compute_reduced_flats)
def compute_reduced_flats(
self,
reduced_method="median",
overwrite=True,
output_dtype=numpy.float32,
return_info: bool = False,
):
return super().compute_reduced_flats(
reduced_method=reduced_method,
overwrite=overwrite,
output_dtype=output_dtype,
return_info=return_info,
)
@docstring(TomoScanBase.compute_reduced_flats)
def compute_reduced_darks(
self,
reduced_method="mean",
overwrite=True,
output_dtype=numpy.float32,
return_info: bool = False,
):
return super().compute_reduced_darks(
reduced_method=reduced_method,
overwrite=overwrite,
output_dtype=output_dtype,
return_info=return_info,
)
@staticmethod
@docstring(TomoScanBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, HDF5TomoScanIdentifier):
raise TypeError(
f"identifier should be an instance of {HDF5TomoScanIdentifier}"
)
return HDF5TomoScan(scan=identifier.file_path, entry=identifier.data_path)
@docstring(TomoScanBase)
def get_identifier(self) -> ScanIdentifier:
return HDF5TomoScanIdentifier(
object=self, hdf5_file=self.master_file, entry=self.entry
)
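# Illustrative round trip (hypothetical file): the identifier serializes the
# (master file, entry) pair and can rebuild an equivalent scan object.
#
#     scan = HDF5TomoScan(scan="scan.h5", entry="entry")
#     same_scan = HDF5TomoScan.from_identifier(scan.get_identifier())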
class HDF5XRD3DScan(HDF5TomoScan):
"""
Class used to read nexus file representing a 3D-XRD acquisition.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._rocking = None
self._base_tilt = None
@property
def rocking(self) -> typing.Optional[tuple]:
if self._rocking is None:
self._check_hdf5scan_validity()
with HDF5File(self.master_file, "r") as h5_file:
_rocking = h5py_read_dataset(
h5_file[self._entry][self.nexus_path.ROCKING_PATH]
)
# cast in float
self._rocking = tuple([float(r) for r in _rocking])
return self._rocking
@property
def base_tilt(self) -> typing.Optional[tuple]:
if self._base_tilt is None:
self._check_hdf5scan_validity()
with HDF5File(self.master_file, "r") as h5_file:
_base_tilt = h5py_read_dataset(
h5_file[self._entry][self.nexus_path.BASE_TILT_PATH]
)
# cast in float
self._base_tilt = tuple([float(bt) for bt in _base_tilt])
return self._base_tilt
@property
def frames(self) -> typing.Optional[tuple]:
"""return tuple of frames. Frames contains"""
if self._frames is None:
image_keys = self.image_key
rotation_angles = self.rotation_angle
x_translation = self.x_translation
if x_translation is None and image_keys is not None:
x_translation = [None] * len(image_keys)
y_translation = self.y_translation
if y_translation is None and image_keys is not None:
y_translation = [None] * len(image_keys)
z_translation = self.z_translation
if z_translation is None and image_keys is not None:
z_translation = [None] * len(image_keys)
rocking = self.rocking
if rocking is None and image_keys is not None:
rocking = [None] * len(image_keys)
base_tilt = self.base_tilt
if base_tilt is None and image_keys is not None:
base_tilt = [None] * len(image_keys)
if image_keys is not None and len(image_keys) != len(rotation_angles):
raise ValueError(
"`rotation_angle` and `image_key` have "
"incoherent size (%s vs %s). Unable to "
"deduce frame properties" % (len(rotation_angles), len(image_keys))
)
self._frames = []
delta_angle = None
last_proj_frame = None
return_already_reach = False
if image_keys is None:
# in the case there is no frame / image keys registered at all
return self._frames
for i_frame, rot_a, img_key, x_tr, y_tr, z_tr, rck, bt in zip(
range(len(rotation_angles)),
rotation_angles,
image_keys,
x_translation,
y_translation,
z_translation,
rocking,
base_tilt,
):
url = DataUrl(
file_path=self.master_file,
data_slice=(i_frame),
data_path=self.get_detector_data_path(),
scheme="silx",
)
frame = XRD3DFrame(
index=i_frame,
url=url,
image_key=img_key,
rotation_angle=rot_a,
x_translation=x_tr,
y_translation=y_tr,
z_translation=z_tr,
rocking=rck,
base_tilt=bt,
)
if self.image_key_control is not None:
try:
is_control_frame = (
ImageKey.from_value(
int(
self.image_key_control[  # pylint: disable=E1136  (false positive: pylint infers it can be None)
frame.index
]
)
)
is ImageKey.ALIGNMENT
)
except Exception:
_logger.warning(
f"Unable to deduce if {frame.index} is a control frame. Consider it is not"
)
is_control_frame = False
else:
return_already_reach, delta_angle = self._is_return_frame(
img_key=img_key,
lframe=frame,
llast_proj_frame=last_proj_frame,
ldelta_angle=delta_angle,
return_already_reach=return_already_reach,
)
is_control_frame = return_already_reach
frame._is_control_frame = is_control_frame
self._frames.append(frame)
last_proj_frame = frame
self._frames = tuple(self._frames)
return self._frames
class TomoFrame:
"""class to store all metadata information of a NXTomo frame"""
def __init__(
self,
index: int,
url: typing.Optional[DataUrl] = None,
image_key: typing.Union[None, ImageKey, int] = None,
rotation_angle: typing.Optional[float] = None,
is_control_proj: bool = False,
x_translation: typing.Optional[float] = None,
y_translation: typing.Optional[float] = None,
z_translation: typing.Optional[float] = None,
intensity_monitor: typing.Optional[float] = None,
):
assert type(index) is int
self._index = index
if image_key is not None:
self._image_key = ImageKey.from_value(image_key)
else:
self._image_key = None
self._rotation_angle = rotation_angle
self._url = url
self._is_control_frame = is_control_proj
self._data = None
self._x_translation = x_translation
self._y_translation = y_translation
self._z_translation = z_translation
self._intensity_monitor = intensity_monitor
@property
def index(self) -> int:
return self._index
@property
def image_key(self) -> ImageKey:
return self._image_key
@image_key.setter
def image_key(self, image_key: ImageKey) -> None:
if not isinstance(image_key, ImageKey):
raise TypeError(f"{image_key} is expected to be an instance of {ImageKey}")
self._image_key = image_key
@property
def rotation_angle(self) -> float:
return self._rotation_angle
@rotation_angle.setter
def rotation_angle(self, angle: float) -> None:
self._rotation_angle = angle
@property
def url(self) -> DataUrl:
return self._url
@property
def is_control(self) -> bool:
return self._is_control_frame
@property
def x_translation(self):
return self._x_translation
@property
def y_translation(self):
return self._y_translation
@property
def z_translation(self):
return self._z_translation
@property
def intensity_monitor(self):
return self._intensity_monitor
@is_control.setter
def is_control(self, is_return: bool):
self._is_control_frame = is_return
def __str__(self):
return (
"Frame {index},: image_key: {image_key},"
"is_control: {is_control},"
"rotation_angle: {rotation_angle},"
"x_translation: {x_translation},"
"y_translation: {y_translation},"
"z_translation: {z_translation},"
"url: {url}".format(
index=self.index,
image_key=self.image_key,
is_control=self.is_control,
rotation_angle=self.rotation_angle,
url=self.url.path(),
x_translation=self.x_translation,
y_translation=self.y_translation,
z_translation=self.z_translation,
)
)
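# Illustrative example: building a standalone frame description.
#
#     frame = TomoFrame(index=0, image_key=ImageKey.PROJECTION.value, rotation_angle=90.0)
#     frame.is_control = False  # mark it as a regular (non-return) projection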
class XRD3DFrame(TomoFrame):
"""class to store all metadata information of a 3d-xrd nexus frame"""
def __init__(
self,
index: int,
url: typing.Optional[DataUrl] = None,
image_key: typing.Union[ImageKey, int] = None,
rotation_angle: typing.Optional[float] = None,
is_control_proj: bool = False,
x_translation: typing.Optional[float] = None,
y_translation: typing.Optional[float] = None,
z_translation: typing.Optional[float] = None,
rocking: typing.Optional[float] = None,
base_tilt: typing.Optional[float] = None,
):
super().__init__(
index=index,
url=url,
image_key=image_key,
rotation_angle=rotation_angle,
is_control_proj=is_control_proj,
x_translation=x_translation,
y_translation=y_translation,
z_translation=z_translation,
)
self._rocking = rocking
self._base_tilt = base_tilt
@property
def rocking(self) -> typing.Optional[float]:
return self._rocking
@property
def base_tilt(self) -> typing.Optional[float]:
return self._base_tilt
def __str__(self):
p_str = super(XRD3DFrame, self).__str__()
p_str += "rocking: {rocking}," "base-tilt: {base_tilt}".format(
rocking=self.rocking, base_tilt=self.base_tilt
)
return p_str
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/scan/mock.py 0000644 0236253 0006511 00000110652 00000000000 020511 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
# Copyright (C) 2016 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""
Utils to mock scans
"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "30/09/2019"
import h5py
import numpy
import os
from xml.etree import ElementTree as cElementTree  # cElementTree was removed in Python 3.9
import fabio
import fabio.edfimage
from .hdf5scan import ImageKey, HDF5TomoScan
from tomoscan.esrf.volume.hdf5volume import HDF5Volume
from silx.io.utils import h5py_read_dataset
from .utils import dump_info_file
import logging
_logger = logging.getLogger(__name__)
class ScanMock:
"""Base class to mock as scan (radios, darks, flats, reconstructions...)"""
PIXEL_SIZE = 0.457
def __init__(
self,
scan_path,
n_radio,
n_ini_radio=None,
n_extra_radio=0,
scan_range=360,
n_recons=0,
n_pag_recons=0,
recons_vol=False,
dim=200,
ref_n=0,
flat_n=0,
dark_n=0,
scene="noise",
):
"""
:param scan_path:
:param n_radio:
:param n_ini_radio:
:param n_extra_radio:
:param scan_range:
:param n_recons:
:param n_pag_recons:
:param recons_vol:
:param dim:
:param ref_n: deprecated, replaced by flat_n
:param flat_n:
:param dark_n:
:param str scene: scene type.
* 'noise': generate radios from numpy.random
* 'increasing value': first frame values will be 0, the second 1...
* 'arange': numpy.arange values running through the frames
* 'perfect-sphere': generate a sphere which just fits in the
detector dimensions
TODO: add some different scene types.
"""
self.det_width = dim
self.det_height = dim
self.scan_path = scan_path
self.n_radio = n_radio
self.scene = scene
os.makedirs(scan_path, exist_ok=True)
if ref_n != 0:
# TODO: add a deprecation warning
_logger.warning("ref_n is deprecated. Please use flat_n instead")
if flat_n != 0:
raise ValueError(
"You provide ref_n and flat_n. Please only provide flat_n"
)
flat_n = ref_n
self.write_metadata(
n_radio=n_radio, scan_range=scan_range, flat_n=flat_n, dark_n=dark_n
)
def add_radio(self, index=None):
raise NotImplementedError("Base class")
def add_reconstruction(self, index=None):
raise NotImplementedError("Base class")
def add_pag_reconstruction(self, index=None):
raise NotImplementedError("Base class")
def add_recons_vol(self):
raise NotImplementedError("Base class")
def write_metadata(self, n_radio, scan_range, flat_n, dark_n):
raise NotImplementedError("Base class")
def end_acquisition(self):
raise NotImplementedError("Base class")
def _get_radio_data(self, index):
if self.scene == "noise":
return numpy.random.random((self.det_height * self.det_width)).reshape(
(self.det_width, self.det_height)
)
elif self.scene == "increasing value":
return numpy.zeros((self.det_width, self.det_height), dtype="f") + index
elif self.scene == "arange":
start = index * (self.det_height * self.det_width)
stop = (index + 1) * (self.det_height * self.det_width)
return numpy.arange(start=start, stop=stop).reshape(
self.det_width, self.det_height
)
elif self.scene == "perfect-sphere":
background = numpy.zeros((self.det_height, self.det_width))
# radius of the largest sphere that just fits in the detector
radius = min(background.shape) // 2
def _compute_radius_to_center(data):
assert data.ndim == 2
ycenter = data.shape[0] // 2
xcenter = data.shape[1] // 2
y, x = numpy.ogrid[: data.shape[0], : data.shape[1]]
r = numpy.sqrt((x - xcenter) ** 2 + (y - ycenter) ** 2)
return r
radii = _compute_radius_to_center(background)
scale = 1
background[radii < radius * scale] = 1.0
return background
else:
raise ValueError("selected scene %s is no managed" % self.scene)
class MockHDF5(ScanMock):
"""
Mock an acquisition in a hdf5 file.
note: for now the Mock class only manages one initial flat series and one final one
"""
_PROJ_COUNT = 1
def __init__(
self,
scan_path,
n_proj,
n_ini_proj=None,
n_alignement_proj=0,
scan_range=360,
n_recons=0,
n_pag_recons=0,
recons_vol=False,
dim=200,
create_ini_dark=True,
create_ini_ref=True,
create_final_ref=False,
create_ini_flat=True,
create_final_flat=False,
n_refs=10,
n_flats=10,
scene="noise",
intensity_monitor=False,
distance=None,
energy=None,
sample_name="test",
group_size=None,
magnification=None,
x_pos=None,
y_pos=None,
z_pos=None,
field_of_view="Full",
estimated_cor_frm_motor=None,
):
"""
:param scan_path: directory of the file containing the hdf5 acquisition
:param n_proj: number of projections (does not include alignment projections)
:param n_ini_proj: number of projections to add in the constructor
:param n_alignement_proj: number of alignment projections
:param int scan_range:
:param n_recons:
:param n_pag_recons:
:param recons_vol:
:param dim: frame dim - only square frames are managed for now
:param create_ini_dark: create one initial dark frame on construction
:param create_ini_flat: create the initial series of flats (n_flats) on
construction (after creation of the dark)
:param create_final_flat: create the final series of flats (n_flats) on
construction (after creation of the dark)
:param n_refs: number of flats per series (deprecated, use n_flats)
:param distance: if not None then the distance will be saved in the dataset
:param energy: if not None then the energy will be saved in the dataset
"""
if create_ini_ref is False:
_logger.warning("create_ini_ref is deprecated. Please use create_init_flat")
create_ini_flat = create_ini_ref
if create_final_ref is True:
_logger.warning(
"create_final_ref is deprecated. Please use create_final_flat"
)
create_final_flat = create_final_ref
if n_refs != 10:
_logger.warning("n_refs is deprecated, please use n_flats")
n_flats = n_refs
self.rotation_angle = numpy.linspace(start=0, stop=scan_range, num=n_proj + 1)
self.rotation_angle_return = numpy.linspace(
start=scan_range, stop=0, num=n_alignement_proj
)
self.scan_master_file = os.path.join(
scan_path, os.path.basename(scan_path) + ".h5"
)
self._intensity_monitor = intensity_monitor
self._n_flats = n_flats
self.scan_entry = "entry"
self._sample_name = sample_name
self._group_size = group_size
self._x_pos = x_pos
self._y_pos = y_pos
self._z_pos = z_pos
self._magnification = magnification
super(MockHDF5, self).__init__(
scan_path=scan_path,
n_radio=n_proj,
n_ini_radio=n_ini_proj,
n_extra_radio=n_alignement_proj,
scan_range=scan_range,
n_recons=n_recons,
n_pag_recons=n_pag_recons,
recons_vol=recons_vol,
dim=dim,
scene=scene,
)
if create_ini_dark:
self.add_initial_dark()
if create_ini_flat:
self.add_initial_flat()
if n_ini_proj is not None:
for i_radio in range(n_ini_proj):
self.add_radio(index=i_radio)
if create_final_flat:
self.add_final_flat()
if energy is not None:
self.add_energy(energy)
if distance is not None:
self.add_distance(distance)
self._define_fov(field_of_view, estimated_cor_frm_motor)
self.scan = HDF5TomoScan(scan=self.scan_master_file, entry="entry")
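# Illustrative usage sketch (path is hypothetical): create a small mocked
# NXtomo acquisition with 10 projections of 20x20 pixels and read it back.
#
#     mock = MockHDF5(scan_path="/tmp/mock_scan", n_proj=10, n_ini_proj=10, dim=20)
#     scan = mock.scan  # HDF5TomoScan pointing to the generated master file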
@property
def has_intensity_monitor(self):
return self._intensity_monitor
def add_initial_dark(self):
dark = (
numpy.random.random((self.det_height * self.det_width))
.reshape((1, self.det_width, self.det_height))
.astype("f")
)
if self.has_intensity_monitor:
diode_data = numpy.random.random() * 100
else:
diode_data = None
self._append_frame(
data_=dark,
rotation_angle=self.rotation_angle[-1],
image_key=ImageKey.DARK_FIELD.value,
image_key_control=ImageKey.DARK_FIELD.value,
diode_data=diode_data,
x_pos=self._x_pos,
y_pos=self._y_pos,
z_pos=self._z_pos,
)
def add_initial_flat(self):
for i in range(self._n_flats):
flat = (
numpy.random.random((self.det_height * self.det_width))
.reshape((1, self.det_width, self.det_height))
.astype("f")
)
if self.has_intensity_monitor:
diode_data = numpy.random.random() * 100
else:
diode_data = None
self._append_frame(
data_=flat,
rotation_angle=self.rotation_angle[0],
image_key=ImageKey.FLAT_FIELD.value,
image_key_control=ImageKey.FLAT_FIELD.value,
diode_data=diode_data,
x_pos=self._x_pos,
y_pos=self._y_pos,
z_pos=self._z_pos,
)
def add_final_flat(self):
for i in range(self._n_flats):
flat = (
numpy.random.random((self.det_height * self.det_width))
.reshape((1, self.det_width, self.det_height))
.astype("f")
)
if self.has_intensity_monitor:
diode_data = numpy.random.random() * 100
else:
diode_data = None
self._append_frame(
data_=flat,
rotation_angle=self.rotation_angle[-1],
image_key=ImageKey.FLAT_FIELD.value,
image_key_control=ImageKey.FLAT_FIELD.value,
diode_data=diode_data,
x_pos=self._x_pos,
y_pos=self._y_pos,
z_pos=self._z_pos,
)
def add_radio(self, index=None):
radio = self._get_radio_data(index=index)
radio = radio.reshape((1, self.det_height, self.det_width))
if self.has_intensity_monitor:
diode_data = numpy.random.random() * 100
else:
diode_data = None
self._append_frame(
data_=radio,
rotation_angle=self.rotation_angle[index],
image_key=ImageKey.PROJECTION.value,
image_key_control=ImageKey.PROJECTION.value,
diode_data=diode_data,
x_pos=self._x_pos,
y_pos=self._y_pos,
z_pos=self._z_pos,
)
def add_alignment_radio(self, index, angle):
radio = self._get_radio_data(index=index)
radio = radio.reshape((1, self.det_height, self.det_width))
if self.has_intensity_monitor:
diode_data = numpy.random.random() * 100
else:
diode_data = None
self._append_frame(
data_=radio,
rotation_angle=angle,
image_key=ImageKey.PROJECTION.value,
image_key_control=ImageKey.ALIGNMENT.value,
diode_data=diode_data,
x_pos=self._x_pos,
y_pos=self._y_pos,
z_pos=self._z_pos,
)
def _append_frame(
self,
data_,
rotation_angle,
image_key,
image_key_control,
diode_data=None,
x_pos=None,
y_pos=None,
z_pos=None,
):
with h5py.File(self.scan_master_file, "a") as h5_file:
entry_one = h5_file.require_group(self.scan_entry)
instrument_grp = entry_one.require_group("instrument")
detector_grp = instrument_grp.require_group("detector")
sample_grp = entry_one.require_group("sample")
# add data
if "data" in detector_grp:
# read and remove data
current_dataset = h5py_read_dataset(detector_grp["data"])
new_dataset = numpy.append(current_dataset, data_)
del detector_grp["data"]
shape = list(current_dataset.shape)
shape[0] += 1
new_dataset = new_dataset.reshape(shape)
else:
new_dataset = data_
# add diode / intensity monitor data
if diode_data is not None:
diode_grp = entry_one.require_group("instrument/diode")
if "data" in diode_grp:
new_diode = h5py_read_dataset(diode_grp["data"])
new_diode = numpy.append(new_diode, diode_data)
del diode_grp["data"]
else:
new_diode = diode_data
# add x position
if x_pos is not None:
sample_grp = entry_one.require_group("sample")
if "x_translation" in sample_grp:
new_x_trans = h5py_read_dataset(sample_grp["x_translation"])
new_x_trans = numpy.append(new_x_trans, x_pos)
del sample_grp["x_translation"]
else:
new_x_trans = [
x_pos,
]
# add y position
if y_pos is not None:
sample_grp = entry_one.require_group("sample")
if "y_translation" in sample_grp:
new_y_trans = h5py_read_dataset(sample_grp["y_translation"])
new_y_trans = numpy.append(new_y_trans, y_pos)
del sample_grp["y_translation"]
else:
new_y_trans = [
y_pos,
]
# add z position
if z_pos is not None:
sample_grp = entry_one.require_group("sample")
if "z_translation" in sample_grp:
new_z_trans = h5py_read_dataset(sample_grp["z_translation"])
new_z_trans = numpy.append(new_z_trans, z_pos)
del sample_grp["z_translation"]
else:
new_z_trans = [
z_pos,
]
# add rotation angle
if "rotation_angle" in sample_grp:
new_rot_angle = h5py_read_dataset(sample_grp["rotation_angle"])
new_rot_angle = numpy.append(new_rot_angle, rotation_angle)
del sample_grp["rotation_angle"]
else:
new_rot_angle = [
rotation_angle,
]
# add image_key
if "image_key" in detector_grp:
new_image_key = h5py_read_dataset(detector_grp["image_key"])
new_image_key = numpy.append(new_image_key, image_key)
del detector_grp["image_key"]
else:
new_image_key = [
image_key,
]
# add image_key_control
if "image_key_control" in detector_grp:
new_image_key_control = h5py_read_dataset(
detector_grp["image_key_control"]
)
new_image_key_control = numpy.append(
new_image_key_control, image_key_control
)
del detector_grp["image_key_control"]
else:
new_image_key_control = [
image_key_control,
]
# add count_time
if "count_time" in detector_grp:
new_count_time = h5py_read_dataset(detector_grp["count_time"])
new_count_time = numpy.append(new_count_time, self._PROJ_COUNT)
del detector_grp["count_time"]
else:
new_count_time = [
self._PROJ_COUNT,
]
with h5py.File(self.scan_master_file, "a") as h5_file:
entry_one = h5_file.require_group(self.scan_entry)
instrument_grp = entry_one.require_group("instrument")
if "NX_class" not in instrument_grp.attrs:
instrument_grp.attrs["NX_class"] = "NXinstrument"
detector_grp = instrument_grp.require_group("detector")
if "NX_class" not in detector_grp.attrs:
detector_grp.attrs["NX_class"] = "NXdetector"
sample_grp = entry_one.require_group("sample")
if "NX_class" not in sample_grp.attrs:
sample_grp.attrs["NX_class"] = "NXsample"
# write camera information
detector_grp["data"] = new_dataset
detector_grp["image_key"] = new_image_key
detector_grp["image_key_control"] = new_image_key_control
detector_grp["count_time"] = new_count_time
# write sample information
sample_grp["rotation_angle"] = new_rot_angle
if x_pos is not None:
sample_grp["x_translation"] = new_x_trans
if y_pos is not None:
sample_grp["y_translation"] = new_y_trans
if z_pos is not None:
sample_grp["z_translation"] = new_z_trans
if self._intensity_monitor:
diode_grp = entry_one.require_group("instrument/diode")
if "NX_class" not in diode_grp.attrs:
diode_grp.attrs["NX_class"] = "NXdetector"
diode_grp["data"] = new_diode
def write_metadata(self, n_radio, scan_range, flat_n, dark_n):
with h5py.File(self.scan_master_file, "a") as h5_file:
entry_one = h5_file.require_group(self.scan_entry)
instrument_grp = entry_one.require_group("instrument")
detector_grp = instrument_grp.require_group("detector")
entry_one.require_group("sample")
entry_one.attrs["NX_class"] = "NXentry"
entry_one.attrs["definition"] = "NXtomo"
if "size" not in detector_grp:
detector_grp["size"] = (self.det_width, self.det_height)
if "x_pixel_size" not in detector_grp:
detector_grp["x_pixel_size"] = ScanMock.PIXEL_SIZE
if "y_pixel_size" not in detector_grp:
detector_grp["y_pixel_size"] = ScanMock.PIXEL_SIZE
if "magnification" not in detector_grp and self._magnification is not None:
detector_grp["magnification"] = self._magnification
sample_grp = entry_one.require_group("sample")
if "name" not in sample_grp:
sample_grp["name"] = self._sample_name
if self._group_size is not None and "group_size" not in entry_one:
entry_one["group_size"] = self._group_size
def end_acquisition(self):
# no specific operation to do
pass
def _define_fov(self, acquisition_fov, estimated_cor_from_motor):
with h5py.File(self.scan_master_file, "a") as h5_file:
entry_one = h5_file.require_group(self.scan_entry)
instrument_grp = entry_one.require_group("instrument")
detector_grp = instrument_grp.require_group("detector")
if "field_of_view" not in detector_grp:
detector_grp["field_of_view"] = acquisition_fov
if estimated_cor_from_motor is not None:
detector_grp["estimated_cor_from_motor"] = estimated_cor_from_motor
def add_energy(self, energy):
with h5py.File(self.scan_master_file, "a") as h5_file:
beam_grp = h5_file[self.scan_entry].require_group("beam")
if "incident_energy" in beam_grp:
del beam_grp["incident_energy"]
beam_grp["incident_energy"] = energy
beam_grp_2 = h5_file[self.scan_entry].require_group("instrument/beam")
if "incident_energy" in beam_grp_2:
del beam_grp_2["incident_energy"]
beam_grp_2["incident_energy"] = energy
def add_distance(self, distance):
with h5py.File(self.scan_master_file, "a") as h5_file:
detector_grp = h5_file[self.scan_entry].require_group("instrument/detector")
if "distance" in detector_grp:
del detector_grp["distance"]
detector_grp["distance"] = distance
class MockEDF(ScanMock):
"""Mock a EDF acquisition"""
_RECONS_PATTERN = "_slice_"
_PAG_RECONS_PATTERN = "_slice_pag_"
_DISTANCE = 0.25
_ENERGY = 19.0
def __init__(
self,
scan_path,
n_radio,
n_ini_radio=None,
n_extra_radio=0,
scan_range=360,
n_recons=0,
n_pag_recons=0,
recons_vol=False,
dim=200,
scene="noise",
dark_n=0,
ref_n=0,
flat_n=0,
rotation_angle_endpoint=False,
energy=None,
pixel_size=None,
distance=None,
srcurrent_start=200.0,
srcurrent_end=100.0,
):
self._last_radio_index = -1
self._energy = energy if energy is not None else self._ENERGY
self._pixel_size = pixel_size if pixel_size is not None else self.PIXEL_SIZE
self._distance = distance if distance is not None else self._DISTANCE
super(MockEDF, self).__init__(
scan_path=scan_path,
n_radio=n_radio,
n_ini_radio=n_ini_radio,
n_extra_radio=n_extra_radio,
scan_range=scan_range,
n_recons=n_recons,
n_pag_recons=n_pag_recons,
recons_vol=recons_vol,
dim=dim,
scene=scene,
dark_n=dark_n,
ref_n=ref_n,
flat_n=flat_n,
)
self._proj_rotation_angles = numpy.linspace(
min(scan_range, 0),
max(scan_range, 0),
n_radio,
endpoint=rotation_angle_endpoint,
)
self._srcurrent = numpy.linspace(
srcurrent_start, srcurrent_end, num=n_radio, endpoint=True
)
if n_ini_radio:
for i_radio in range(n_ini_radio):
self.add_radio(i_radio)
for i_extra_radio in range(n_extra_radio):
self.add_radio(i_extra_radio + n_ini_radio)
for i_dark in range(dark_n):
self.add_dark(i_dark)
for i_flat in range(flat_n):
self.add_flat(i_flat)
for i_recons in range(n_recons):
self.add_reconstruction(i_recons)
for i_recons in range(n_pag_recons):
self.add_pag_reconstruction(i_recons)
if recons_vol is True:
self.add_recons_vol()
@property
def energy(self) -> float:
return self._energy
@property
def pixel_size(self) -> float:
return self._pixel_size
@property
def distance(self) -> float:
return self._distance
def get_info_file(self):
return os.path.join(self.scan_path, os.path.basename(self.scan_path) + ".info")
def end_acquisition(self):
# create xml file
xml_file = os.path.join(
self.scan_path, os.path.basename(self.scan_path) + ".xml"
)
if not os.path.exists(xml_file):
# write the final xml file
root = cElementTree.Element("root")
tree = cElementTree.ElementTree(root)
tree.write(xml_file)
def write_metadata(self, n_radio, scan_range, flat_n, dark_n):
info_file = self.get_info_file()
if not os.path.exists(info_file):
dump_info_file(
file_path=info_file,
tomo_n=n_radio,
scan_range=scan_range,
flat_n=flat_n,
flat_on=flat_n,
dark_n=dark_n,
dim_1=self.det_width,
dim_2=self.det_height,
col_beg=0,
col_end=self.det_width,
row_beg=0,
row_end=self.det_height,
pixel_size=self.pixel_size,
distance=self.distance,
energy=self.energy,
)
def add_radio(self, index=None):
if index is not None:
self._last_radio_index = index
index_ = index
else:
self._last_radio_index += 1
index_ = self._last_radio_index
file_name = (
os.path.basename(self.scan_path) + "_{0:04d}".format(index_) + ".edf"
)
f = os.path.join(self.scan_path, file_name)
if not os.path.exists(f):
if index_ < len(self._proj_rotation_angles):
rotation_angle = self._proj_rotation_angles[index_]
else:
rotation_angle = 0.0
if index_ < len(self._srcurrent):
srcurrent = self._srcurrent[index_]
else:
srcurrent = self._srcurrent[-1]
data = self._get_radio_data(index=index_)
assert data is not None
assert data.shape == (self.det_width, self.det_height)
edf_writer = fabio.edfimage.EdfImage(
data=data,
header={
"motor_pos": f"{rotation_angle} 0.0 1.0 2.0;",
"motor_mne": "srot sx sy sz;",
"counter_pos": f"{srcurrent};",
"counter_mne": "srcur;",
},
)
edf_writer.write(f)
def add_dark(self, index):
file_name = "darkend{0:04d}.edf".format(index)
file_path = os.path.join(self.scan_path, file_name)
if not os.path.exists(file_path):
data = numpy.random.random((self.det_height * self.det_width)).reshape(
(self.det_width, self.det_height)
)
edf_writer = fabio.edfimage.EdfImage(
data=data,
header={
"motor_pos": f"{index} 0.0 1.0 2.0;",
"motor_mne": "srot sx sy sz;",
"counter_pos": f"{self._srcurrent[0]};",
"counter_mne": "srcur;",
},
)
edf_writer.write(file_path)
def add_flat(self, index):
file_name = "refHST{0:04d}.edf".format(index)
file_path = os.path.join(self.scan_path, file_name)
if not os.path.exists(file_path):
data = numpy.random.random((self.det_height * self.det_width)).reshape(
(self.det_width, self.det_height)
)
edf_writer = fabio.edfimage.EdfImage(
data=data,
header={
"motor_pos": f"{index} 0.0 1.0 2.0",
"motor_mne": "srot sx sy sz",
"counter_pos": f"{self._srcurrent[0]};",
"counter_mne": "srcur;",
},
)
edf_writer.write(file_path)
@staticmethod
def mockReconstruction(folder, nRecons=5, nPagRecons=0):
"""
create reconstruction files into the given folder
:param str folder: the path of the folder where to save the reconstructions
:param nRecons: the number of reconstructions to mock
:param nPagRecons: the number of paganin reconstructions to mock
"""
assert type(nRecons) is int and nRecons >= 0
basename = os.path.basename(folder)
dim = 200
for i in range(nRecons):
vol_file = os.path.join(
folder, basename + MockEDF._RECONS_PATTERN + str(i).zfill(4) + ".hdf5"
)
data = numpy.zeros((1, dim, dim))
data[:: i + 2, :: i + 2] = 1.0
volume = HDF5Volume(
file_path=vol_file,
data_path="entry",
data=data,
overwrite=True,
)
volume.save()
for i in range(nPagRecons):
vol_file = os.path.join(
folder,
basename + MockEDF._PAG_RECONS_PATTERN + str(i).zfill(4) + ".hdf5",
)
data = numpy.zeros((1, dim, dim))
data[:: i + 2, :: i + 2] = 1.0
volume = HDF5Volume(
file_path=vol_file,
data_path="entry",
data=data,
)
volume.save()
@staticmethod
def _createVolInfoFile(
filePath,
shape,
voxelSize=1,
valMin=0.0,
valMax=1.0,
s1=0.0,
s2=1.0,
S1=0.0,
S2=1.0,
):
assert len(shape) == 3
f = open(filePath, "w")
f.writelines(
"\n".join(
[
"! PyHST_SLAVE VOLUME INFO FILE",
"NUM_X = %s" % shape[2],
"NUM_Y = %s" % shape[1],
"NUM_Z = %s" % shape[0],
"voxelSize = %s" % voxelSize,
"BYTEORDER = LOWBYTEFIRST",
"ValMin = %s" % valMin,
"ValMax = %s" % valMax,
"s1 = %s" % s1,
"s2 = %s" % s2,
"S1 = %s" % S1,
"S2 = %s" % S2,
]
)
)
f.close()
@staticmethod
def fastMockAcquisition(folder, n_radio=20, n_extra_radio=0, scan_range=360):
"""
Simple function creating an acquisition into the given directory.
This won't create complete data, scan.info or scan.xml files, but just the
structure that the data watcher is able to detect in EDF mode.
"""
assert type(n_radio) is int and n_radio > 0
basename = os.path.basename(folder)
dim = 200
os.makedirs(folder, exist_ok=True)
# create info file
info_file = os.path.join(folder, basename + ".info")
if not os.path.exists(info_file):
# write the info file
with open(info_file, "w") as info_file:
info_file.write("TOMO_N= " + str(n_radio) + "\n")
info_file.write("ScanRange= " + str(scan_range) + "\n")
# create scan files
for i in range((n_radio + n_extra_radio)):
file_name = basename + "_{0:04d}".format(i) + ".edf"
f = os.path.join(folder, file_name)
if not os.path.exists(f):
data = numpy.random.random(dim * dim).reshape(dim, dim)
edf_writer = fabio.edfimage.EdfImage(data=data, header={"tata": "toto"})
edf_writer.write(f)
# create xml file
xml_file = os.path.join(folder, basename + ".xml")
if not os.path.exists(xml_file):
# write the final xml file
root = cElementTree.Element("root")
tree = cElementTree.ElementTree(root)
tree.write(xml_file)
@staticmethod
def mockScan(
scanID,
nRadio=5,
nRecons=1,
nPagRecons=0,
dim=10,
scan_range=360,
n_extra_radio=0,
start_dark=False,
end_dark=False,
start_flat=False,
end_flat=False,
start_dark_data=None,
end_dark_data=None,
start_flat_data=None,
end_flat_data=None,
):
"""
Create some random radios and reconstruction in the folder
:param str scanID: the folder where to save the radios and scans
:param int nRadio: The number of radios to create
:param int nRecons: the number of reconstructions to mock
:param int nPagRecons: the number of paganin reconstructions to mock
:param int dim: dimension of the files (nb rows/columns)
:param int scan_range: scan range, usually 180 or 360
:param int n_extra_radio: number of radios acquired after the full range is
covered, usually used to observe sample movement
during acquisition
:param bool start_dark: do we want to create a dark series at start
:param bool end_dark: do we want to create a dark series at end
:param bool start_flat: do we want to create a flat series at start
:param bool end_flat: do we want to create a flat series at end
:param start_dark_data: optional data for the start dark series when start_dark is True; random values are generated otherwise
:param end_dark_data: optional data for the end dark series when end_dark is True; random values are generated otherwise
:param start_flat_data: optional data for the start flat series when start_flat is True; random values are generated otherwise
:param end_flat_data: optional data for the end flat series when end_flat is True; random values are generated otherwise
"""
assert type(scanID) is str
assert type(nRadio) is int
assert type(nRecons) is int
assert type(dim) is int
from tomoscan.factory import Factory # avoid cyclic import
MockEDF.fastMockAcquisition(
folder=scanID,
n_radio=nRadio,
scan_range=scan_range,
n_extra_radio=n_extra_radio,
)
MockEDF.mockReconstruction(
folder=scanID, nRecons=nRecons, nPagRecons=nPagRecons
)
if start_dark:
MockEDF.add_dark_serie(
scan_path=scanID, n_elmt=4, index=0, dim=dim, data=start_dark_data
)
if start_flat:
MockEDF.add_flat_serie(
scan_path=scanID, n_elmt=4, index=0, dim=dim, data=start_flat_data
)
if end_dark:
MockEDF.add_dark_serie(
scan_path=scanID,
n_elmt=4,
index=nRadio - 1,
dim=dim,
data=end_dark_data,
)
if end_flat:
MockEDF.add_flat_serie(
scan_path=scanID,
n_elmt=4,
index=nRadio - 1,
dim=dim,
data=end_flat_data,
)
return Factory.create_scan_object(scanID)
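# Illustrative usage sketch (path is hypothetical): create a small EDF scan
# with dark and flat series and get the corresponding scan object.
#
#     scan = MockEDF.mockScan(
#         "/tmp/mock_edf", nRadio=10, dim=20, start_dark=True, start_flat=True
#     )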
@staticmethod
def add_flat_serie(scan_path, n_elmt, index, dim, data):
ref_file = os.path.join(scan_path, "ref0000_{}.edf".format(str(index).zfill(4)))
if data is None:
data = numpy.array(
numpy.random.random(n_elmt * dim * dim) * 100, numpy.uint32
)
data.shape = (n_elmt, dim, dim)
edf_writer = fabio.edfimage.EdfImage(data=data[0], header={"tata": "toto"})
for frame in data[1:]:
edf_writer.append_frame(data=frame)
edf_writer.write(ref_file)
@staticmethod
def add_dark_serie(scan_path, n_elmt, index, dim, data):
dark_file = os.path.join(scan_path, "darkend{}.edf".format(str(index).zfill(4)))
if data is None:
data = numpy.array(
numpy.random.random(n_elmt * dim * dim) * 100, numpy.uint32
)
data.shape = (n_elmt, dim, dim)
edf_writer = fabio.edfimage.EdfImage(data=data[0], header={"tata": "toto"})
for frame in data[1:]:
edf_writer.append_frame(data=frame)
edf_writer.write(dark_file)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/scan/utils.py 0000644 0236253 0006511 00000054271 00000000000 020724 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
# Copyright (C) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "10/10/2019"
import os
import fabio
from silx.io.url import DataUrl
from silx.utils.deprecation import deprecated
from silx.io.utils import h5py_read_dataset, get_data as silx_get_data
from silx.io.dictdump import h5todict
from typing import Union
from typing import Iterable
import numpy
import logging
import sys
from tomoscan.io import HDF5File
import warnings
import contextlib
import fnmatch
import h5py
from tomoscan.scanbase import ReducedFramesInfos, TomoScanBase
_logger = logging.getLogger(__name__)
def get_parameters_frm_par_or_info(file_: str) -> dict:
"""
    Create a dictionary from the file, with the information names as keys and
    their values as values
    :param file_: path to the file to parse
    :type file_: str
    :raises: ValueError when failing to parse some line.
"""
assert os.path.exists(file_) and os.path.isfile(file_)
ddict = {}
    with open(file_, "r") as f:
        lines = f.readlines()
for line in lines:
if "=" not in line:
continue
line_ = line.replace(" ", "")
line_ = line_.rstrip("\n")
# remove on the line comments
if "#" in line_:
line_ = line_.split("#")[0]
if line_ == "":
continue
try:
key, value = line_.split("=")
except ValueError:
            raise ValueError('failed to extract information from "%s"' % line_)
else:
# try to cast the value on int, else float else don't
try:
value = int(value)
except Exception:
try:
value = float(value)
except Exception:
pass
ddict[key.lower()] = value
return ddict
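# Usage sketch, assuming a hypothetical .info file: keys are lower-cased,
# values are cast to int or float when possible, inline "#" comments dropped
#
#     with open("/tmp/demo.info", "w") as f:
#         f.write("TOMO_N= 1500\nPixelSize= 3.02 # in um\nComment= raw\n")
#     get_parameters_frm_par_or_info("/tmp/demo.info")
#     # -> {'tomo_n': 1500, 'pixelsize': 3.02, 'comment': 'raw'}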
def extract_urls_from_edf(
file_: str, start_index: Union[None, int], n_frames: Union[int, None] = None
) -> dict:
"""
    return one DataUrl for each frame contained in the file_
    :param file_: path to the file to parse
    :type file_: str
    :param n_frames: number of frames in each edf file (inferred if not provided)
    :type n_frames: Union[int, None]
    :param start_index: index assigned to the first frame (0 if None)
    :type start_index: Union[None, int]
"""
res = {}
index = 0 if start_index is None else start_index
if n_frames is None:
with fabio.open(file_) as fabio_file:
n_frames = fabio_file.nframes
for i_frame in range(n_frames):
res[index] = DataUrl(
scheme="fabio",
file_path=file_,
data_slice=[
i_frame,
],
)
index += 1
return res
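# Usage sketch (hypothetical file path): for a 3-frame EDF file and
# start_index=10 the returned mapping is
#
#     extract_urls_from_edf("/data/scan/radio.edf", start_index=10, n_frames=3)
#     # -> {10: DataUrl(scheme="fabio", file_path="/data/scan/radio.edf", data_slice=[0]),
#     #     11: DataUrl(..., data_slice=[1]),
#     #     12: DataUrl(..., data_slice=[2])}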
def get_compacted_dataslices(
urls: dict,
max_grp_size=None,
return_merged_indices=False,
return_url_set=False,
subsampling=1,
):
"""
Regroup urls to get the data more efficiently.
Build a structure mapping files indices to information on
how to load the data: `{indices_set: data_location}`
where `data_location` contains contiguous indices.
:param dict urls: Dictionary where the key is an integer and the value is
a silx `DataUrl`.
    :param max_grp_size: maximum size of a group of urls
:type max_grp_size: None or int
:param bool return_merged_indices: if True return the last merged indices.
Deprecated
    :param bool return_url_set: if True, also return a dict of unique urls keyed
        by (file_path, data_path, data_slice)
:return: Dictionary where the key is a list of indices, and the value is
the corresponding `silx.io.url.DataUrl` with merged data_slice
:rtype: dict
"""
def _convert_to_slice(idx):
if numpy.isscalar(idx):
return slice(idx, idx + 1)
# otherwise, assume already slice object
return idx
def is_contiguous_slice(slice1, slice2):
if numpy.isscalar(slice1):
slice1 = slice(slice1, slice1 + 1)
if numpy.isscalar(slice2):
slice2 = slice(slice2, slice2 + 1)
return slice2.start == slice1.stop
def merge_slices(slice1, slice2):
return slice(slice1.start, slice2.stop)
if return_merged_indices is True:
warnings.warn(
"return_merged_indices is deprecated. It will be removed in version 0.8"
)
if max_grp_size is None:
max_grp_size = sys.maxsize
if subsampling is None:
subsampling = 1
sorted_files_indices = sorted(urls.keys())
idx0 = sorted_files_indices[0]
first_url = urls[idx0]
merged_indices = [[idx0]]
data_location = [
[
first_url.file_path(),
first_url.data_path(),
_convert_to_slice(first_url.data_slice()),
first_url.scheme(),
]
]
pos = 0
grp_size = 0
curr_fp, curr_dp, curr_slice, curr_scheme = data_location[pos]
for idx in sorted_files_indices[1:]:
url = urls[idx]
next_slice = _convert_to_slice(url.data_slice())
if (
(grp_size <= max_grp_size)
and (url.file_path() == curr_fp)
and (url.data_path() == curr_dp)
and is_contiguous_slice(curr_slice, next_slice)
and (url.scheme() == curr_scheme)
):
merged_indices[pos].append(idx)
merged_slices = merge_slices(curr_slice, next_slice)
data_location[pos][-2] = merged_slices
curr_slice = merged_slices
grp_size += 1
else: # "jump"
pos += 1
merged_indices.append([idx])
data_location.append(
[
url.file_path(),
url.data_path(),
_convert_to_slice(url.data_slice()),
url.scheme(),
]
)
curr_fp, curr_dp, curr_slice, curr_scheme = data_location[pos]
grp_size = 0
# Format result
res = {}
for ind, dl in zip(merged_indices, data_location):
res.update(
dict.fromkeys(
ind,
DataUrl(
file_path=dl[0], data_path=dl[1], data_slice=dl[2], scheme=dl[3]
),
)
)
# Subsample
if subsampling > 1:
next_pos = 0
for idx in sorted_files_indices:
url = res[idx]
ds = url.data_slice()
res[idx] = DataUrl(
file_path=url.file_path(),
data_path=url.data_path(),
data_slice=slice(next_pos + ds.start, ds.stop, subsampling),
)
n_imgs = ds.stop - (ds.start + next_pos)
next_pos = abs(-n_imgs % subsampling)
if return_url_set:
url_set = {}
for _, url in res.items():
path = url.file_path(), url.data_path(), str(url.data_slice())
url_set[path] = url
if return_merged_indices:
            return res, merged_slices, url_set
else:
return res, url_set
if return_merged_indices:
return res, merged_slices
else:
return res
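# Usage sketch (hypothetical file): three single-frame urls contiguous in the
# same dataset are compacted into one shared merged slice(0, 3)
#
#     urls = {
#         i: DataUrl(file_path="/data/a.h5", data_path="/entry/data",
#                    data_slice=i, scheme="silx")
#         for i in range(3)
#     }
#     compacted = get_compacted_dataslices(urls)
#     # compacted[0] is compacted[1] is compacted[2]
#     # compacted[0].data_slice() == slice(0, 3)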
@deprecated(
replacement="tomoscan.serie.from_sequences_to_series", since_version="0.8.0"
)
def from_sequences_to_grps(scans: Iterable) -> tuple:
from tomoscan.serie import sequences_to_series_from_sample_name
return sequences_to_series_from_sample_name(scans)
@deprecated(replacement="tomoscan.serie.check_serie_is_valid", since_version="0.8.0")
def check_grp_is_valid(scans: Iterable):
from tomoscan.serie import check_serie_is_consistant_frm_sample_name
return check_serie_is_consistant_frm_sample_name(scans)
@deprecated(replacement="tomoscan.serie.serie_is_complete", since_version="0.8.0")
def grp_is_complete(scans: Iterable) -> bool:
from tomoscan.serie import serie_is_complete_from_group_size
return serie_is_complete_from_group_size(scans)
def dataset_has_broken_vds(url: DataUrl):
"""
check that the provided url is not a VDS with broken links.
"""
if not isinstance(url, DataUrl):
raise TypeError(f"{url} is expected to be an instance of {DataUrl}")
with HDF5File(url.file_path(), mode="r") as h5f:
dataset = h5f[url.data_path()]
if not dataset.is_virtual:
return False
else:
for file_ in get_unique_files_linked(url=url):
if not os.path.exists(file_):
_logger.warning(f"{file_} does not exists")
return True
return False
def get_datasets_linked_to_vds(url: DataUrl):
"""
    Return the set of (file_path, data_path, data_slice) tuples linked to the provided url
"""
if not isinstance(url, DataUrl):
raise TypeError(f"{url} is expected to be an instance of {DataUrl}")
start_file_path = url.file_path()
start_dataset_path = url.data_path()
start_dataset_slice = url.data_slice()
if isinstance(start_dataset_slice, slice):
start_dataset_slice = tuple(
range(
start_dataset_slice.start,
start_dataset_slice.stop,
start_dataset_slice.step or 1,
)
)
virtual_dataset_to_treat = set()
final_dataset = set()
already_checked = set()
# first datasets to be tested
virtual_dataset_to_treat.add(
(start_file_path, start_dataset_path, start_dataset_slice),
)
while len(virtual_dataset_to_treat) > 0:
to_treat = list(virtual_dataset_to_treat)
virtual_dataset_to_treat.clear()
for file_path, dataset_path, dataset_slice in to_treat:
if (file_path, dataset_path, dataset_slice) in already_checked:
continue
if os.path.exists(file_path):
with HDF5File(file_path, mode="r") as h5f:
dataset = h5f[dataset_path]
if dataset.is_virtual:
for vs_info in dataset.virtual_sources():
min_frame_bound = vs_info.vspace.get_select_bounds()[0][0]
max_frame_bound = vs_info.vspace.get_select_bounds()[1][0]
if isinstance(dataset_slice, int):
if (
not min_frame_bound
<= dataset_slice
<= max_frame_bound
):
continue
elif isinstance(dataset_slice, tuple):
if (
min_frame_bound > dataset_slice[-1]
or max_frame_bound < dataset_slice[0]
):
continue
with cwd_context():
os.chdir(os.path.dirname(file_path))
                                # FIXME: for now we look at the entire dataset of the n+1 file.
                                # If it also contains virtual datasets, and we want to handle the
                                # case where a part of it is broken but not the part we point to,
                                # this should handle hyperslabs.
virtual_dataset_to_treat.add(
(
os.path.realpath(vs_info.file_name)
if vs_info.file_name != "."
else os.path.abspath(url.file_path()),
# avoid calling os.path.realpath if the dataset is in the same file. Otherwise mess up with paths
vs_info.dset_name,
None,
)
)
else:
final_dataset.add((file_path, dataset_path, dataset_slice))
else:
final_dataset.add((file_path, dataset_path, dataset_slice))
already_checked.add((file_path, dataset_path, dataset_slice))
return final_dataset
def get_unique_files_linked(url: DataUrl):
"""
    Return the set of unique files linked to the DataUrl, without depth limitation
"""
    datasets_linked = get_datasets_linked_to_vds(url=url)
    return {file_path for (file_path, _, _) in datasets_linked}
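# Usage sketch (hypothetical layout): for a master file whose "/entry/data"
# dataset is a VDS spanning two sub-files
#
#     url = DataUrl(file_path="master.h5", data_path="/entry/data", scheme="silx")
#     get_unique_files_linked(url)
#     # -> {"/abs/path/sub_0.h5", "/abs/path/sub_1.h5"}
#     # dataset_has_broken_vds(url) is True as soon as one sub-file is missing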
def get_files_from_pattern(file_pattern: str, pattern: str, research_dir: str) -> dict:
"""
    :return: all files matching file_pattern, where {pattern} holds the index. Key is the index and value is the file name.
:rtype: dict
"""
    if not isinstance(file_pattern, str):
        raise TypeError(f"file_pattern is expected to be str not {type(file_pattern)}")
    if not isinstance(pattern, str):
        raise TypeError(f"pattern is expected to be str not {type(pattern)}")
    if not isinstance(research_dir, str):
        raise TypeError(
            f"research_dir is expected to be a str not {type(research_dir)}"
        )
    # type checks come first so an invalid file_pattern fails with a clear error
    files_frm_pattern = {}
    if ("{" + pattern + "}") not in file_pattern:
        return files_frm_pattern
    if not os.path.exists(research_dir):
        raise FileNotFoundError(f"{research_dir} does not exist")
# look for some index_zfill4
file_path_fn = file_pattern.format(**{pattern: "*"})
for file in os.listdir(research_dir):
if fnmatch.fnmatch(file.lower(), file_path_fn.lower()):
# try to deduce the index from pattern
idx_start = file_pattern.find("{" + pattern + "}")
idx_end = len(file_pattern.replace("{" + pattern + "}", "")) - idx_start
idx_as_str = file[idx_start:-idx_end]
if idx_as_str != "": # handle case of an empty string
try:
idx_as_int = int(idx_as_str)
except ValueError:
_logger.warning("Could not determined")
else:
files_frm_patterm[idx_as_int] = file
return files_frm_patterm
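# Usage sketch (hypothetical files "slice_0000.tiff" and "slice_0012.tiff"
# present in research_dir):
#
#     get_files_from_pattern("slice_{index_zfill4}.tiff",
#                            pattern="index_zfill4",
#                            research_dir="/data/recons")
#     # -> {0: "slice_0000.tiff", 12: "slice_0012.tiff"}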
def dump_info_file(
file_path,
tomo_n,
scan_range,
flat_n,
flat_on,
dark_n,
dim_1,
dim_2,
col_beg,
col_end,
row_beg,
row_end,
pixel_size,
distance,
energy,
):
# write the info file
with open(file_path, "w") as info_file:
info_file.write("TOMO_N= " + str(tomo_n) + "\n")
info_file.write("ScanRange= " + str(scan_range) + "\n")
info_file.write("REF_N= " + str(flat_n) + "\n")
info_file.write("REF_ON= " + str(flat_on) + "\n")
info_file.write("DARK_N= " + str(dark_n) + "\n")
info_file.write("Dim_1= " + str(dim_1) + "\n")
info_file.write("Dim_2= " + str(dim_2) + "\n")
info_file.write("Col_beg= " + str(col_beg) + "\n")
info_file.write("Col_end= " + str(col_end) + "\n")
info_file.write("Row_beg= " + str(row_beg) + "\n")
info_file.write("Row_end= " + str(row_end) + "\n")
info_file.write("PixelSize= " + str(pixel_size) + "\n")
info_file.write("Distance= " + str(distance) + "\n")
info_file.write("Energy= " + str(energy) + "\n")
@contextlib.contextmanager
def cwd_context(new_cwd=None):
try:
curdir = os.getcwd()
except Exception as e:
_logger.error(e)
curdir = None
try:
if new_cwd is not None and os.path.isfile(new_cwd):
new_cwd = os.path.dirname(new_cwd)
if new_cwd not in (None, ""):
os.chdir(new_cwd)
yield
finally:
if curdir is not None:
os.chdir(curdir)
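# Usage sketch: the previous working directory is restored even if the body
# raises (hypothetical path)
#
#     before = os.getcwd()
#     with cwd_context("/tmp"):
#         pass  # e.g. resolve relative VDS links from here
#     assert os.getcwd() == before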
def get_data(url: DataUrl):
    # update the current working directory for external datasets
    if url.file_path() is not None and h5py.is_hdf5(url.file_path()):
        # convert path to a real path to ensure it stays constant when changing the current working directory
file_path = os.path.realpath(url.file_path())
with cwd_context(file_path):
with HDF5File(file_path, mode="r") as h5f:
if url.data_path() in h5f:
if url.data_slice() is None:
return h5py_read_dataset(h5f[url.data_path()])
else:
return h5py_read_dataset(
h5f[url.data_path()], index=url.data_slice()
)
else:
# for other file format don't need to do the same
return silx_get_data(url)
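# Usage sketch (hypothetical url): HDF5 urls are resolved from the file's own
# directory (so relative VDS links keep working); any other format falls back
# on silx_get_data
#
#     frame = get_data(DataUrl(file_path="/data/a.h5", data_path="/entry/data",
#                              data_slice=0, scheme="silx"))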
def copy_h5_dict_darks_to(
scan, darks_url: DataUrl, save=False, raise_error_if_url_empty=True
):
"""
    :param TomoScanBase scan: target scan to copy the darks to
    :param DataUrl darks_url: DataUrl pointing to the darks to be copied
    :param bool save: should we save the darks to disk. If not, they will only be set on the scan cache
    :param bool raise_error_if_url_empty: if the provided DataUrl leads to no data (e.g. missing file or dataset), should we raise an error
"""
from tomoscan.scanbase import TomoScanBase # avoid cyclic import
if not isinstance(scan, TomoScanBase):
raise TypeError(
f"scan is expected to be an instance of {TomoScanBase}. {type(scan)} provided"
)
if not isinstance(darks_url, DataUrl):
raise TypeError(
f"darks_url is expected to be an instance of {DataUrl}. {type(darks_url)} provided"
)
if darks_url.scheme() not in (None, "silx", "h5py"):
raise ValueError("handled scheme are 'silx' and 'h5py'")
try:
with cwd_context(darks_url.file_path()):
my_dict = h5todict(
h5file=darks_url.file_path(),
path=darks_url.data_path(),
)
except Exception as e:
if raise_error_if_url_empty:
raise e
else:
return
data, metadata = ReducedFramesInfos.split_data_and_metadata(my_dict)
# handle relative frame position if any
data = from_relative_reduced_frames_to_absolute(reduced_frames=data, scan=scan)
scan.set_reduced_darks(darks=data, darks_infos=metadata)
if save:
scan.save_reduced_darks(darks=data, darks_infos=metadata)
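# Usage sketch (hypothetical url; layout as expected by
# ReducedFramesInfos.split_data_and_metadata, i.e. frame-index keys mapping to
# 2D arrays plus optional metadata):
#
#     copy_h5_dict_darks_to(scan,
#                           DataUrl(file_path="darks.hdf5", data_path="entry",
#                                   scheme="silx"),
#                           save=True)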
def copy_h5_dict_flats_to(
scan, flats_url: DataUrl, save=False, raise_error_if_url_empty=True
):
"""
    :param TomoScanBase scan: target scan to copy the flats to
    :param DataUrl flats_url: DataUrl pointing to the flats to be copied
    :param bool save: should we save the flats to disk. If not, they will only be set on the scan cache
    :param bool raise_error_if_url_empty: if the provided DataUrl leads to no data (e.g. missing file or dataset), should we raise an error
"""
from tomoscan.scanbase import TomoScanBase # avoid cyclic import
if not isinstance(scan, TomoScanBase):
raise TypeError(
f"scan is expected to be an instance of {TomoScanBase}. {type(scan)} provided"
)
if not isinstance(flats_url, DataUrl):
raise TypeError(
f"flats_url is expected to be an instance of {DataUrl}. {type(flats_url)} provided"
)
if flats_url.scheme() not in (None, "silx", "h5py"):
raise ValueError("handled scheme are 'silx' and 'h5py'")
try:
with cwd_context(flats_url.file_path()):
my_dict = h5todict(
h5file=flats_url.file_path(),
path=flats_url.data_path(),
)
except Exception as e:
if raise_error_if_url_empty:
raise ValueError("DataUrl is not pointing to any data") from e
else:
return
data, metadata = ReducedFramesInfos.split_data_and_metadata(my_dict)
# handle relative frame position if any
data = from_relative_reduced_frames_to_absolute(reduced_frames=data, scan=scan)
scan.set_reduced_flats(flats=data, flats_infos=metadata)
if save:
scan.save_reduced_flats(flats=data, flats_infos=metadata)
def from_relative_reduced_frames_to_absolute(reduced_frames: dict, scan: TomoScanBase):
if not isinstance(reduced_frames, dict):
raise TypeError(
f"reduced_frames is expected to be a dict, {type(reduced_frames)} provided"
)
if not isinstance(scan, TomoScanBase):
raise TypeError(f"scan is expected to be a TomoScanBase, {type(scan)} provided")
frame_n = len(scan.projections) + len(scan.darks) + len(scan.flats)
def convert(index):
if isinstance(index, str) and index.endswith("r"):
return int(float(index[:-1]) * (frame_n - 1))
else:
return index
return {convert(key): value for key, value in reduced_frames.items()}
def from_absolute_reduced_frames_to_relative(reduced_frames: dict, scan: TomoScanBase):
if not isinstance(reduced_frames, dict):
raise TypeError(
f"reduced_frames is expected to be a dict, {type(reduced_frames)} provided"
)
if not isinstance(scan, TomoScanBase):
raise TypeError(f"scan is expected to be a TomoScanBase, {type(scan)} provided")
frame_n = len(scan.projections) + len(scan.darks) + len(scan.flats)
def convert(index):
if isinstance(index, str) and index.endswith("r"):
return index
else:
return f"{int(index) / frame_n}r"
return {convert(key): value for key, value in reduced_frames.items()}
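# Usage sketch (scan and frame hypothetical): for a scan totalling 100 frames
# (projections + darks + flats) the two helpers invert each other on the
# "r"-suffixed relative notation
#
#     from_relative_reduced_frames_to_absolute({"1r": frame}, scan)  # -> {99: frame}
#     from_absolute_reduced_frames_to_relative({99: frame}, scan)    # -> {"1.0r": frame}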
tomoscan-1.2.2/tomoscan/esrf/utils.py
from silx.utils.deprecation import deprecated_warning
deprecated_warning(
"Module",
name="tomoscan.esrf.utils",
reason="Have been moved",
replacement="tomoscan.esrf.scan.utils",
only_once=True,
)
from .scan.utils import * # noqa F401
tomoscan-1.2.2/tomoscan/esrf/volume/__init__.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""This module is dedicated to instances of :class:`VolumeBase` used at esrf"""
from .edfvolume import EDFVolume # noqa F401
from .hdf5volume import HDF5Volume # noqa F401
from .jp2kvolume import JP2KVolume # noqa F401
from .tiffvolume import MultiTIFFVolume, TIFFVolume # noqa F401
from .rawvolume import RawVolume # noqa F401
tomoscan-1.2.2/tomoscan/esrf/volume/edfvolume.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module defining utils for an edf volume"""
__authors__ = ["H. Payno", "P. Paleo"]
__license__ = "MIT"
__date__ = "27/01/2022"
from typing import Optional
from tomoscan.scanbase import TomoScanBase
from tomoscan.esrf.volume.singleframebase import VolumeSingleFrameBase
from tomoscan.esrf.identifier.edfidentifier import EDFVolumeIdentifier
import numpy
from silx.io.url import DataUrl
from tomoscan.utils import docstring
import fabio
import os
class EDFVolume(VolumeSingleFrameBase):
"""
Save volume data to single frame edf and metadata to .txt files
:warning: each file saved under {volume_basename}_{index_zfill6}.edf is considered to be a slice of the volume.
"""
DEFAULT_DATA_SCHEME = "fabio"
DEFAULT_DATA_EXTENSION = "edf"
def __init__(
self,
folder: Optional[str] = None,
volume_basename: Optional[str] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
header: Optional[dict] = None,
start_index=0,
data_extension=DEFAULT_DATA_EXTENSION,
metadata_extension=VolumeSingleFrameBase.DEFAULT_METADATA_EXTENSION,
) -> None:
if folder is not None:
url = DataUrl(
file_path=str(folder),
data_path=None,
)
else:
url = None
super().__init__(
volume_basename=volume_basename,
url=url,
data=data,
source_scan=source_scan,
metadata=metadata,
data_url=data_url,
metadata_url=metadata_url,
overwrite=overwrite,
start_index=start_index,
data_extension=data_extension,
metadata_extension=metadata_extension,
)
self._header = header
@property
def header(self) -> Optional[dict]:
"""possible header for the edf files"""
return self._header
@docstring(VolumeSingleFrameBase)
def save_frame(self, frame, file_name, scheme):
if scheme == "fabio":
header = self.header or {}
edf_writer = fabio.edfimage.EdfImage(
data=frame,
header=header,
)
parent_dir = os.path.dirname(file_name)
if parent_dir != "":
os.makedirs(parent_dir, exist_ok=True)
edf_writer.write(file_name)
else:
raise ValueError(f"scheme {scheme} is not handled")
@docstring(VolumeSingleFrameBase)
def load_frame(self, file_name, scheme):
if scheme == "fabio":
return fabio.open(file_name).data
else:
raise ValueError(f"scheme {scheme} is not handled")
@staticmethod
@docstring(VolumeSingleFrameBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, EDFVolumeIdentifier):
raise TypeError(
f"identifier should be an instance of {EDFVolumeIdentifier}"
)
return EDFVolume(
folder=identifier.folder,
volume_basename=identifier.file_prefix,
)
@docstring(VolumeSingleFrameBase)
def get_identifier(self) -> EDFVolumeIdentifier:
if self.url is None:
raise ValueError("no file_path provided. Cannot provide an identifier")
return EDFVolumeIdentifier(
object=self, folder=self.url.file_path(), file_prefix=self._volume_basename
)
@staticmethod
def example_defined_from_str_identifier() -> str:
return " ; ".join(
[
f"{EDFVolume(folder='/path/to/my/my_folder').get_identifier().to_str()}",
f"{EDFVolume(folder='/path/to/my/my_folder', volume_basename='mybasename').get_identifier().to_str()} (if mybasename != folder name)",
]
)
tomoscan-1.2.2/tomoscan/esrf/volume/hdf5volume.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module defining utils for an hdf5 volume"""
__authors__ = ["H. Payno", "P. Paleo"]
__license__ = "MIT"
__date__ = "27/01/2022"
import os
from typing import Optional
from tomoscan.scanbase import TomoScanBase
from tomoscan.volumebase import VolumeBase
from tomoscan.esrf.identifier.hdf5Identifier import HDF5VolumeIdentifier
from silx.io.url import DataUrl
from tomoscan.utils import docstring
from tomoscan.io import HDF5File
from silx.io.dictdump import dicttonx, nxtodict
from silx.utils.deprecation import deprecated_warning
import numpy
import logging
import h5py
_logger = logging.getLogger(__name__)
class HDF5Volume(VolumeBase):
"""
    Volume where both data and metadata are stored in a single HDF5 file, but at different locations.
"""
DATA_DATASET_NAME = "results/data"
METADATA_GROUP_NAME = "configuration"
def __init__(
self,
file_path: Optional[str] = None,
data_path: Optional[str] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
) -> None:
url = self._get_url_from_file_path_data_path(
file_path=file_path, data_path=data_path
)
self._file_path = file_path
self._data_path = data_path
super().__init__(
url=url,
data=data,
source_scan=source_scan,
metadata=metadata,
data_url=data_url,
metadata_url=metadata_url,
overwrite=overwrite,
)
@property
def data_extension(self):
if self.data_url is not None and self.data_url.file_path() is not None:
return os.path.splitext(self.data_url.file_path())[1]
@property
def metadata_extension(self):
if self.metadata_url is not None and self.metadata_url.file_path() is not None:
return os.path.splitext(self.metadata_url.file_path())[1]
@staticmethod
def _get_url_from_file_path_data_path(
file_path: Optional[str], data_path: Optional[str]
) -> Optional[DataUrl]:
if file_path is not None and data_path is not None:
return DataUrl(file_path=file_path, data_path=data_path, scheme="silx")
else:
return None
@VolumeBase.data.setter
def data(self, data):
if not isinstance(data, (numpy.ndarray, type(None), h5py.VirtualLayout)):
raise TypeError(
f"data is expected to be None or a numpy array not {type(data)}"
)
if isinstance(data, numpy.ndarray) and data.ndim != 3:
raise ValueError(f"data is expected to be 3D and not {data.ndim}D.")
self._data = data
@property
def file_path(self):
return self._file_path
@file_path.setter
def file_path(self, file_path: Optional[str]):
if not (file_path is None or isinstance(file_path, str)):
raise TypeError
self._file_path = file_path
self.url = self._get_url_from_file_path_data_path(
self.file_path, self.data_path
)
@property
def data_path(self):
return self._data_path
@data_path.setter
def data_path(self, data_path: Optional[str]):
if not (data_path is None or isinstance(data_path, str)):
raise TypeError
self._data_path = data_path
self.url = self._get_url_from_file_path_data_path(
self.file_path, self.data_path
)
@docstring(VolumeBase)
def deduce_data_and_metadata_urls(self, url: Optional[DataUrl]) -> tuple:
if url is None:
return None, None
else:
if url.data_slice() is not None:
raise ValueError(f"data_slice is not handled by the {HDF5Volume}")
file_path = url.file_path()
data_path = url.data_path()
if data_path is None:
raise ValueError(
"data_path not provided from the DataUrl. Please provide one."
)
scheme = url.scheme() or "silx"
return (
# data url
DataUrl(
file_path=file_path,
data_path="/".join([data_path, self.DATA_DATASET_NAME]),
scheme=scheme,
),
                # metadata url
DataUrl(
file_path=file_path,
data_path="/".join([data_path, self.METADATA_GROUP_NAME]),
scheme=scheme,
),
)
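    # Usage sketch: for DataUrl(file_path="vol.hdf5", data_path="entry0000")
    # the deduced urls point to "entry0000/results/data" (data) and
    # "entry0000/configuration" (metadata)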
@docstring(VolumeBase)
def save_data(self, url: Optional[DataUrl] = None, mode="a", **kwargs) -> None:
"""
:raises KeyError: if data path already exists and overwrite set to False
:raises ValueError: if data is None
"""
        # to be discussed. Not sure we should raise an error in this case. Could be useful, but it could also be a double-edged sword
if self.data is None:
raise ValueError("No data to be saved")
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
else:
_logger.info(f"save data to {url.path()}")
if url.file_path() is not None and os.path.dirname(url.file_path()) != "":
os.makedirs(os.path.dirname(url.file_path()), exist_ok=True)
with HDF5File(filename=url.file_path(), mode=mode) as h5s:
if url.data_path() in h5s:
if self.overwrite:
_logger.debug(
f"overwrite requested. Will remove {url.data_path()} entry"
)
del h5s[url.data_path()]
else:
raise OSError(
f"Unable to save data to {url.data_path()}. This path already exists in {url.file_path()}. If you want you can ask to overwrite it."
)
if isinstance(self.data, h5py.VirtualLayout):
h5s.create_virtual_dataset(name=url.data_path(), layout=self.data)
else:
h5s.create_dataset(url.data_path(), data=self.data, **kwargs)
@docstring(VolumeBase)
def data_file_saver_generator(
self, n_frames, data_url: DataUrl, overwrite: bool, mode: str = "a", **kwargs
):
"""
        warning: the file will remain open until the generator exits
"""
class _FrameDumper:
"""
will not work for VirtualLayout
"""
Dataset = None
# shared dataset
def __init__(
self,
root_group,
data_path,
create_dataset,
n_frames,
i_frame,
overwrite,
mode,
) -> None:
self.data_path = data_path
self.root_group = root_group
self.create_dataset = create_dataset
self.n_frames = n_frames
self.mode = mode
self.overwrite = overwrite
self.i_frame = i_frame
self.__kwargs = kwargs # keep chunk arguments for example
def __setitem__(self, key, value):
frame = value
if _FrameDumper.Dataset is None:
if self.data_path in self.root_group:
if self.overwrite:
_logger.debug(
f"overwrite requested. Will remove {data_url.data_path()} entry"
)
del h5s[data_url.data_path()]
else:
raise OSError(
f"Unable to save data to {data_url.data_path()}. This path already exists in {data_url.file_path()}. If you want you can ask to overwrite it."
)
_FrameDumper.Dataset = h5s.create_dataset( # pylint: disable=E1137
name=data_url.data_path(),
shape=(n_frames, frame.shape[0], frame.shape[1]),
dtype=frame.dtype,
**self.__kwargs,
)
if key != slice(None, None, None):
raise ValueError("item setting only handle ':' for now")
_FrameDumper.Dataset[i_frame] = frame # pylint: disable=E1137
if (
data_url.file_path() is not None
and os.path.dirname(data_url.file_path()) != ""
):
os.makedirs(os.path.dirname(data_url.file_path()), exist_ok=True)
with HDF5File(filename=data_url.file_path(), mode=mode) as h5s:
for i_frame in range(n_frames):
yield _FrameDumper(
create_dataset=i_frame == 0,
data_path=data_url.data_path(),
root_group=h5s,
n_frames=n_frames,
i_frame=i_frame,
overwrite=overwrite,
mode=mode,
)
@docstring(VolumeBase)
def save_metadata(self, url: Optional[DataUrl] = None) -> None:
"""
:raises KeyError: if data path already exists and overwrite set to False
:raises ValueError: if data is None
"""
if self.metadata is None:
raise ValueError("No metadata to be saved")
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
else:
_logger.info(f"save metadata to {url.path()}")
if url.file_path() is not None and os.path.dirname(url.file_path()) != "":
os.makedirs(os.path.dirname(url.file_path()), exist_ok=True)
dicttonx(
self.metadata,
h5file=url.file_path(),
h5path=url.data_path(),
update_mode="replace",
mode="a",
)
@docstring(VolumeBase)
def load_data(
self, url: Optional[DataUrl] = None, store: bool = True
) -> numpy.ndarray:
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
with HDF5File(filename=url.file_path(), mode="r") as h5s:
if url.data_path() in h5s:
data = h5s[url.data_path()][()]
else:
raise KeyError(f"Data path {url.data_path()} not found.")
if store:
self.data = data
return data
def get_slice(
self,
index=None,
axis=None,
xy=None,
xz=None,
yz=None,
url: Optional[DataUrl] = None,
):
if xy is yz is xz is None and (index is None or axis is None):
raise ValueError("index and axis should be provided")
if xy is not None:
deprecated_warning(
type_="parameter",
name="xy",
replacement="axis and index",
)
if axis is None and index is None:
axis = 0
index = xy
else:
raise ValueError("several axis (previously xy, xz, yz requested")
elif xz is not None:
deprecated_warning(
type_="parameter",
name="xz",
replacement="axis and index",
)
if axis is None and index is None:
axis = 1
index = xz
else:
raise ValueError("several axis (previously xy, xz, yz requested")
elif yz is not None:
deprecated_warning(
type_="parameter",
name="yz",
replacement="axis and index",
)
if axis is None and index is None:
axis = 2
index = yz
else:
raise ValueError("several axis (previously xy, xz, yz requested")
if self.data is not None:
return self.select(volume=self.data, axis=axis, index=index)
else:
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
with HDF5File(filename=url.file_path(), mode="r") as h5s:
if url.data_path() in h5s:
return self.select(
volume=h5s[url.data_path()], axis=axis, index=index
)
else:
raise KeyError(f"Data path {url.data_path()} not found.")
@docstring(VolumeBase)
def load_metadata(self, url: Optional[DataUrl] = None, store: bool = True) -> dict:
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
try:
metadata = nxtodict(
h5file=url.file_path(), path=url.data_path(), asarray=False
)
except KeyError:
_logger.warning(f"no metadata found in {url.data_path()}")
metadata = {}
if store:
self.metadata = metadata
return metadata
def browse_metadata_files(self, url=None):
"""
        return a generator going through all the existing files associated with the volume metadata
"""
url = url or self.metadata_url
if url is None:
return
elif url.file_path() is not None and os.path.exists(url.file_path()):
yield url.file_path()
def browse_data_files(self, url=None):
"""
        return a generator going through all the existing files associated with the volume data
"""
url = url or self.data_url
if url is None:
return
elif url.file_path() is not None and os.path.exists(url.file_path()):
yield url.file_path()
def browse_data_urls(self, url=None):
url = url or self.data_url
if url is not None and os.path.exists(url.file_path()):
yield url
@staticmethod
@docstring(VolumeBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, HDF5VolumeIdentifier):
raise TypeError(
f"identifier should be an instance of {HDF5VolumeIdentifier}"
)
return HDF5Volume(
file_path=identifier.file_path,
data_path=identifier.data_path,
)
@docstring(VolumeBase)
def get_identifier(self) -> HDF5VolumeIdentifier:
if self.url is None:
raise ValueError("no file_path provided. Cannot provide an identifier")
return HDF5VolumeIdentifier(
object=self, hdf5_file=self.url.file_path(), entry=self.url.data_path()
)
@staticmethod
def example_defined_from_str_identifier() -> str:
return (
HDF5Volume(file_path="/path/to/file_path", data_path="entry0000")
.get_identifier()
.to_str()
)
@docstring(VolumeBase)
def browse_slices(self, url=None):
        if url is None and self.data is not None:
            # avoid shadowing the built-in `slice`
            for data_slice in self.data:
                yield data_slice
        else:
            url = url or self.data_url
            if url is None:
                raise ValueError(
                    "No data or data_url known and no url provided. Unable to browse slices"
                )
            with HDF5File(filename=url.file_path(), mode="r") as h5s:
                if url.data_path() in h5s:
                    for data_slice in h5s[url.data_path()]:
                        yield data_slice
                else:
                    raise KeyError(f"Data path {url.data_path()} not found.")
@docstring(VolumeBase)
def load_chunk(self, chunk, url=None):
url = url or self.data_url
if url is None:
raise ValueError("Cannot get data_url. An url should be provided.")
with HDF5File(filename=url.file_path(), mode="r") as h5s:
if url.data_path() in h5s:
return h5s[url.data_path()][chunk]
else:
raise KeyError(f"Data path {url.data_path()} not found.")
def get_volume_shape(self, url=None):
url = url or self.data_url
if url is None:
raise ValueError("Cannot get data_url. An url should be provided.")
if self.data is not None:
return self.data.shape
else:
with HDF5File(filename=url.file_path(), mode="r") as h5s:
if url.data_path() in h5s:
return h5s[url.data_path()].shape
else:
return None
def get_default_data_path_for_volume(scan: TomoScanBase) -> str:
if not isinstance(scan, TomoScanBase):
raise TypeError(
f"scan is expected to be an instance of {TomoScanBase} not {type(scan)}"
)
entry = getattr(scan, "entry", "entry")
return "/".join([entry, "reconstruction"])
tomoscan-1.2.2/tomoscan/esrf/volume/jp2kvolume.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module defining utils for a jp2k volume"""
__authors__ = ["H. Payno", "P. Paleo"]
__license__ = "MIT"
__date__ = "27/01/2022"
from typing import Optional
import os
import numpy
from tomoscan.esrf.identifier.jp2kidentifier import JP2KVolumeIdentifier
from tomoscan.scanbase import TomoScanBase
from .singleframebase import VolumeSingleFrameBase
from silx.io.url import DataUrl
from packaging.version import parse as parse_version
from tomoscan.utils import docstring
import logging
try:
import glymur # noqa #F401 needed for later possible lazy loading
except ImportError:
has_glymur = False
has_minimal_openjpeg = False
glymur_version = None
openjpeg_version = None
else:
has_glymur = True
from glymur import set_option as glymur_set_option
from glymur.version import openjpeg_version, version as glymur_version
    # compare versions, not raw strings (e.g. "2.10.0" < "2.3.0" lexically)
    if parse_version(openjpeg_version) < parse_version("2.3.0"):
has_minimal_openjpeg = False
else:
has_minimal_openjpeg = True
_logger = logging.getLogger(__name__)
_MISSING_GLYMUR_MSG = "Failed to import glymur: won't be able to load / save volumes to jp2k. You can install it with pip (pip install glymur)."
class JP2KVolume(VolumeSingleFrameBase):
"""
Save volume data to single frame jp2k files and metadata to .txt file
:param Optional[list] cratios: list of ints. compression ratio for each jpeg2000 layer
:param Optional[list] psnr: list of int.
The PSNR (Peak Signal-to-Noise ratio) for each jpeg2000 layer.
This defines a quality metric for lossy compression.
The number "0" stands for lossless compression.
:param Optional[int] n_threads: number of thread to use for writing. If None will try to get as much as possible
    :warning: each file saved under {volume_basename}_{index_zfill6}.jp2 is considered to be a slice of the volume.
"""
DEFAULT_DATA_EXTENSION = "jp2"
DEFAULT_DATA_SCHEME = "glymur"
def __init__(
self,
folder: Optional[str] = None,
volume_basename: Optional[str] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
start_index=0,
data_extension=DEFAULT_DATA_EXTENSION,
metadata_extension=VolumeSingleFrameBase.DEFAULT_METADATA_EXTENSION,
cratios: Optional[list] = None,
psnr: Optional[list] = None,
n_threads: Optional[int] = None,
) -> None:
if folder is not None:
url = DataUrl(
file_path=str(folder),
data_path=None,
)
else:
url = None
super().__init__(
url=url,
data=data,
volume_basename=volume_basename,
source_scan=source_scan,
metadata=metadata,
data_url=data_url,
metadata_url=metadata_url,
overwrite=overwrite,
start_index=start_index,
data_extension=data_extension,
metadata_extension=metadata_extension,
)
if not has_glymur:
_logger.warning(_MISSING_GLYMUR_MSG)
else:
if not has_minimal_openjpeg:
_logger.warning(
"You must have at least version 2.3.0 of OpenJPEG "
"in order to write jp2k images."
)
self._cratios = cratios
self._psnr = psnr
self.setup_multithread_encoding(n_threads=n_threads)
@property
def cratios(self) -> Optional[list]:
return self._cratios
@cratios.setter
def cratios(self, cratios: Optional[list]):
self._cratios = cratios
@property
def psnr(self) -> Optional[list]:
return self._psnr
@psnr.setter
def psnr(self, psnr: Optional[list]):
self._psnr = psnr
@docstring(VolumeSingleFrameBase)
def save_frame(self, frame, file_name, scheme):
if not has_glymur:
raise RuntimeError(_MISSING_GLYMUR_MSG)
if scheme == "glymur":
glymur.Jp2k(file_name, data=frame, psnr=self.psnr, cratios=self.cratios)
else:
raise ValueError(f"Scheme {scheme} is not handled")
@docstring(VolumeSingleFrameBase)
def load_frame(self, file_name, scheme):
if not has_glymur:
raise RuntimeError(_MISSING_GLYMUR_MSG)
if scheme == "glymur":
jp2_file = glymur.Jp2k(file_name)
return jp2_file[:]
else:
raise ValueError(f"Scheme {scheme} is not handled")
@staticmethod
def setup_multithread_encoding(n_threads=None, what_if_not_available="ignore"):
"""
Setup OpenJpeg multi-threaded encoding.
Parameters
-----------
n_threads: int, optional
Number of threads. If not provided, all available threads are used.
        what_if_not_available: str, optional
What to do if requirements are not fulfilled. Possible values are:
- "ignore": do nothing, proceed
- "print": show an information message
- "raise": raise an error
"""
required_glymur_version = "0.9.3"
required_openjpeg_version = "2.4.0"
def not_available(msg):
if what_if_not_available == "raise":
raise ValueError(msg)
elif what_if_not_available == "print":
print(msg)
if not has_glymur:
not_available(f"glymur not installed. {required_glymur_version} required")
return
elif parse_version(glymur_version) < parse_version(required_glymur_version):
not_available(
f"glymur >= {required_glymur_version} is required for multi-threaded encoding (current version: {glymur_version})"
)
return
elif not has_minimal_openjpeg:
not_available(
f"libopenjpeg >= {required_openjpeg_version} is required for multi-threaded encoding (current version: {openjpeg_version})"
)
return
if n_threads is None:
n_threads = get_available_threads()
glymur_set_option("lib.num_threads", n_threads)
@staticmethod
@docstring(VolumeSingleFrameBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, JP2KVolumeIdentifier):
raise TypeError(
f"identifier should be an instance of {JP2KVolumeIdentifier}"
)
return JP2KVolume(
folder=identifier.folder,
volume_basename=identifier.file_prefix,
)
@docstring(VolumeSingleFrameBase)
def get_identifier(self) -> JP2KVolumeIdentifier:
if self.url is None:
raise ValueError("no file_path provided. Cannot provide an identifier")
return JP2KVolumeIdentifier(
object=self, folder=self.url.file_path(), file_prefix=self._volume_basename
)
@staticmethod
def example_defined_from_str_identifier() -> str:
return " ; ".join(
[
f"{JP2KVolume(folder='/path/to/my/my_folder').get_identifier().to_str()}",
f"{JP2KVolume(folder='/path/to/my/my_folder', volume_basename='mybasename').get_identifier().to_str()} (if mybasename != folder name)",
]
)
def get_available_threads():
return len(os.sched_getaffinity(0))
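# Usage sketch (requires glymur; folder and compression ratios hypothetical):
#
#     volume = JP2KVolume(folder="/tmp/vol_jp2k",
#                         data=numpy.zeros((2, 32, 32), dtype=numpy.uint16),
#                         cratios=[10], overwrite=True)
#     volume.save_data()   # writes vol_jp2k_000000.jp2, vol_jp2k_000001.jp2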
tomoscan-1.2.2/tomoscan/esrf/volume/mock.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module to mock volume"""
from typing import Sized, Union
import numpy
from silx.utils.enum import Enum as _Enum
from silx.image.phantomgenerator import PhantomGenerator
class Scene(_Enum):
SHEPP_LOGAN = "Shepp-Logan"
def create_volume(
frame_dims: Union[int, tuple], z_size: int, scene: Scene = Scene.SHEPP_LOGAN
) -> numpy.ndarray:
"""
    create a numpy array of the requested scene for a total of frame_dims * z_size elements
    :param tuple frame_dims: 2d tuple of frame dimensions
    :param int z_size: number of elements of the volume along the z axis
:param Scene scene: scene to compose
"""
scene = Scene.from_value(scene)
if not isinstance(z_size, int):
raise TypeError(
f"z_size is expected to be an instance of int not {type(z_size)}"
)
if scene is Scene.SHEPP_LOGAN:
if isinstance(frame_dims, Sized):
if not len(frame_dims) == 2:
raise ValueError(
f"frame_dims is expected to be an integer or a list of two integers. Not {frame_dims}"
)
if frame_dims[0] != frame_dims[1]:
raise ValueError(
f"{scene} only handle square frame. Frame width and height should be the same"
)
else:
dim = frame_dims[0]
elif isinstance(frame_dims, int):
dim = frame_dims
else:
raise TypeError(
f"frame_dims is expected to be a list of two integers or an integer. Not {frame_dims}"
)
return numpy.asarray(
[PhantomGenerator.get2DPhantomSheppLogan(dim) * 10000.0] * z_size
)
else:
raise NotImplementedError
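# Usage sketch: a stack of 5 identical 64x64 Shepp-Logan slices
#
#     volume = create_volume(frame_dims=(64, 64), z_size=5)
#     # volume.shape == (5, 64, 64)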
tomoscan-1.2.2/tomoscan/esrf/volume/rawvolume.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module defining utils for an .vol volume (also know as raw)"""
__authors__ = ["H. Payno", "P. Paleo"]
__license__ = "MIT"
__date__ = "10/01/2023"
import sys
import os
from xml.dom.minidom import parseString as parse_xml_string
import logging
from typing import Optional
import numpy
import h5py
from dicttoxml import dicttoxml
from silx.io.url import DataUrl
from silx.io.dictdump import dicttoini
from tomoscan.scanbase import TomoScanBase
from tomoscan.volumebase import VolumeBase
from tomoscan.esrf.identifier.rawidentifier import RawVolumeIdentifier
from tomoscan.utils import docstring
_logger = logging.getLogger(__name__)
class RawVolume(VolumeBase):
"""
    Volume where data is saved as a .vol binary file and metadata is saved in .vol.info and/or .vol.xml
    Note: for now, reading information from the .xml is not managed. We expect to write one or both and read back from the text file (.vol.info)
    Warning: meant as legacy support for pyhst .vol files and existing post-processing tools. We mostly expect software to write .vol files.
"""
DEFAULT_DATA_SCHEME = "raw"
DEFAULT_DATA_EXTENSION = "vol"
DEFAULT_METADATA_SCHEME = "info"
DEFAULT_METADATA_EXTENSION = "vol.info"
def __init__(
self,
file_path: Optional[str] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
append: bool = False,
data_extension=DEFAULT_DATA_EXTENSION,
metadata_extension=DEFAULT_METADATA_EXTENSION,
) -> None:
        if file_path is not None:
            url = DataUrl(file_path=file_path, data_path=None, scheme="raw")
        else:
            # without this branch, url would be undefined (NameError) below
            url = None
        self._file_path = file_path
super().__init__(
url=url,
data=data,
source_scan=source_scan,
metadata=metadata,
data_url=data_url,
metadata_url=metadata_url,
overwrite=overwrite,
data_extension=data_extension,
metadata_extension=metadata_extension,
)
self.append = append
@property
def data_extension(self):
if self.data_url is not None and self.data_url.file_path() is not None:
return os.path.splitext(self.data_url.file_path())[1]
@property
def metadata_extension(self):
if self.metadata_url is not None and self.metadata_url.file_path() is not None:
return os.path.splitext(self.metadata_url.file_path())[1]
@VolumeBase.data.setter
def data(self, data):
if not isinstance(data, (numpy.ndarray, type(None), h5py.VirtualLayout)):
raise TypeError(
f"data is expected to be None or a numpy array not {type(data)}"
)
if isinstance(data, numpy.ndarray) and data.ndim != 3:
raise ValueError(f"data is expected to be 3D and not {data.ndim}D.")
self._data = data
@property
def file_path(self):
return self._file_path
@file_path.setter
def file_path(self, file_path: Optional[str]):
if not (file_path is None or isinstance(file_path, str)):
raise TypeError
self._file_path = file_path
self.url = DataUrl(file_path=file_path, data_path=None, scheme="raw")
@docstring(VolumeBase)
def deduce_data_and_metadata_urls(self, url: Optional[DataUrl]) -> tuple:
if url is None:
return None, None
else:
if url.data_slice() is not None:
raise ValueError(f"data_slice is not handled by the {RawVolume}")
file_path = url.file_path()
data_path = url.data_path()
if data_path is not None:
raise ValueError("data_path is not handle by the .vol volume.")
scheme = url.scheme() or "raw"
metadata_info_file = os.path.splitext(url.file_path())[0] + ".vol.info"
return (
# data url
DataUrl(
file_path=file_path,
data_path=None,
scheme=scheme,
),
                # metadata url
DataUrl(
file_path=metadata_info_file,
data_path=None,
scheme=self.DEFAULT_METADATA_SCHEME,
),
)
@docstring(VolumeBase)
def save_data(self, url: Optional[DataUrl] = None, **kwargs) -> None:
if self.data is None:
return
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
if url.scheme() != "raw":
raise ValueError("Unsupported scheme - please use scheme='raw'")
if url.data_path() is not None:
raise ValueError("No data path expected. Unagleto save data")
_logger.info(f"save data to {url.path()}")
if self.data.dtype != numpy.float32:
raise TypeError(".vol format only takes float32 as data type")
        # check endianness: make sure data is low byte first (little endian)
        if self.data.dtype.byteorder == ">" or (
            self.data.dtype.byteorder == "=" and sys.byteorder != "little"
        ):
            raise TypeError("data is expected to be byteorder: low byte first")
if self.data.ndim == 3:
data = self.data
elif self.data.ndim == 2:
data = self.data.reshape(1, self.data.shape[0], self.data.shape[1])
else:
raise ValueError(f"data should be 3D and not {self.data.ndim}D")
file_mode = "ab" if self.append else "wb"
with open(url.file_path(), file_mode) as fdesc:
if self.append:
n_bytes = os.path.getsize(url.file_path())
fdesc.seek(n_bytes)
data.tofile(fdesc)
@docstring(VolumeBase)
def load_data(self, url: Optional[DataUrl] = None, store: bool = True) -> None:
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
if self.metadata is None:
# for .vol file we need metadata to get shape - expected in a .vol.info file
metadata = self.load_metadata(store=False)
else:
metadata = self.metadata
dimX = metadata.get("NUM_X", None)
dimY = metadata.get("NUM_Y", None)
dimZ = metadata.get("NUM_Z", None)
byte_order = metadata.get("BYTEORDER", "LOWBYTEFIRST")
if byte_order.lower() == "highbytefirst":
byte_order = ">"
elif byte_order.lower() == "lowbytefirst":
byte_order = "<"
else:
raise ValueError(f"Unable to interpret byte order value: {byte_order}")
if dimX is None or dimY is None or dimZ is None:
_logger.error(f"Unable to get volume shape (get: {dimZ, dimY, dimZ} )")
data = None
else:
shape = (int(dimZ), int(dimY), int(dimX))
try:
data_type = numpy.dtype(byte_order + "f")
data = numpy.fromfile(
url.file_path(), dtype=data_type, count=-1, sep=""
)
except Exception as e:
_logger.warning(
f"Fail to load data from {url.file_path()}. Error is {e}."
)
data = None
else:
data = data.reshape(shape)
if store is True:
self.data = data
return data
@docstring(VolumeBase)
def save_metadata(self, url: Optional[DataUrl] = None, store: bool = True) -> None:
"""
:raises KeyError: if data path already exists and overwrite set to False
:raises ValueError: if data is None
"""
if self.metadata is None:
raise ValueError("No metadata to be saved")
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
_logger.info(f"save metadata to {url.path()}")
if url.scheme() == "info":
metadata_file = url.file_path()
_logger.info(f"save data to {metadata_file}")
if len(self.metadata) > 0:
                # same format as INI but without sections: writing works, but reading it back as INI fails
dicttoini(self.metadata, metadata_file)
elif url.scheme() == "lxml":
metadata_file = url.file_path()
_logger.info(f"save data to {metadata_file}")
if len(self.metadata) > 0:
# Format metadata to a XML file, with a format that can be read by imagej.
# Does not make sense to you ? For us neither!
size_xyz = [
int(self.metadata.get(key, 0))
for key in ["NUM_X", "NUM_Y", "NUM_Z"]
]
if 0 in size_xyz:
_logger.error(
"Something wrong with NUM_X, NUM_Y or NUM_X: missing or zero ?"
)
metadata_for_xml = {
"reconstruction": {
"idAc": "N_A_",
"listSubVolume": {
"subVolume": {
"SUBVOLUME_NAME": os.path.basename(
self.data_url.file_path()
),
"SIZEX": size_xyz[0],
"SIZEY": size_xyz[1],
"SIZEZ": size_xyz[2],
"ORIGINX": 1,
"ORIGINY": 1,
"ORIGINZ": 1,
"DIM_REC": numpy.prod(size_xyz),
"BYTE_ORDER": "LOWBYTEFIRST", # !
}
},
}
}
for what in ["voxelSize", "ValMin", "ValMax", "s1", "s2", "S1", "S2"]:
metadata_for_xml["reconstruction"]["listSubVolume"]["subVolume"][
what
] = float(self.metadata.get(what, 0.0))
xml_str = dicttoxml(
metadata_for_xml,
custom_root="tomodb2",
xml_declaration=False,
attr_type=False,
return_bytes=False,
)
xml_str_pretty = parse_xml_string(xml_str).toprettyxml(indent=" ")
with open(metadata_file, mode="w") as file_:
file_.write(xml_str_pretty)
else:
raise ValueError(f"scheme {url.scheme()} is not handled")
@docstring(VolumeBase)
def load_metadata(self, url: Optional[DataUrl] = None, store: bool = True) -> dict:
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
if url.scheme() == "info":
def info_file_to_dict(info_file):
ddict = {}
with open(info_file, "r") as _file:
lines = _file.readlines()
for line in lines:
if "=" not in line:
continue
_line = line.rstrip().replace(" ", "")
_line = _line.split("#")[0]
key, value = _line.split("=")
ddict[key] = value
return ddict
metadata_file = url.file_path()
if url.data_path() is not None:
raise ValueError("data_path is not handled by ini scheme")
else:
try:
metadata = info_file_to_dict(metadata_file)
except FileNotFoundError:
_logger.warning(f"unable to load metadata from {metadata_file}")
metadata = {}
else:
raise ValueError(f"scheme {url.scheme()} is not handled")
if store:
self.metadata = metadata
return metadata
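# Illustrative sketch (assumption, content made up): a minimal ".vol.info" file
# as parsed by info_file_to_dict above. Lines are "KEY = value"; spaces are
# stripped and anything after '#' is ignored.
#
#   NUM_X = 2048
#   NUM_Y = 2048
#   NUM_Z = 512
#   BYTEORDER = LOWBYTEFIRST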
def browse_metadata_files(self, url=None):
"""
return a generator going through all the existing files associated with the data volume
"""
url = url or self.metadata_url
if url is None:
return
elif url.file_path() is not None and os.path.exists(url.file_path()):
yield url.file_path()
def browse_data_files(self, url=None):
"""
return a generator going through all the existing files associated with the data volume
"""
url = url or self.data_url
if url is None:
return
elif url.file_path() is not None and os.path.exists(url.file_path()):
yield url.file_path()
def browse_data_urls(self, url=None):
url = url or self.data_url
if url is not None and os.path.exists(url.file_path()):
yield url
@docstring(VolumeBase)
def data_file_saver_generator(
self, n_frames, data_url: DataUrl, overwrite: bool, mode: str = "a", **kwargs
):
"""
warning: the file will stay open until the generator is exhausted
"""
class _FrameDumper:
"""
will not work for VirtualLayout
"""
Dataset = None
# shared dataset
def __init__(
self,
fid,
) -> None:
self._fid = fid
def __setitem__(self, key, value):
if key != slice(None, None, None):
raise ValueError("item setting only handle ':' for now")
if not isinstance(value, numpy.ndarray):
raise TypeError(
"value is expected to be an instance of numpy.ndarray"
)
value.tofile(self._fid)
if (
data_url.file_path() is not None
and os.path.dirname(data_url.file_path()) != ""
):
os.makedirs(os.path.dirname(data_url.file_path()), exist_ok=True)
with open(data_url.file_path(), "wb") as fid:
for _ in range(n_frames):
yield _FrameDumper(fid=fid)
@staticmethod
@docstring(VolumeBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, RawVolumeIdentifier):
raise TypeError(
f"identifier should be an instance of {RawVolumeIdentifier}"
)
return RawVolume(
file_path=identifier.file_path,
)
@docstring(VolumeBase)
def get_identifier(self) -> RawVolumeIdentifier:
if self.url is None:
raise ValueError("no file_path provided. Cannot provide an identifier")
return RawVolumeIdentifier(object=self, file_path=self.url.file_path())
@staticmethod
def example_defined_from_str_identifier() -> str:
"""example as string to explain how users can defined identifiers from a string"""
return " ; ".join(
[
f"{RawVolume(file_path='/path/to/my/my_volume.vol').get_identifier().to_str()}",
]
)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/volume/singleframebase.py 0000644 0236253 0006511 00000035410 00000000000 023270 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module defining utils for a jp2k volume"""
__authors__ = ["H. Payno", "P. Paleo"]
__license__ = "MIT"
__date__ = "27/01/2022"
from typing import Optional
import os
import re
import numpy
from tomoscan.scanbase import TomoScanBase
from tomoscan.volumebase import VolumeBase
from silx.io.url import DataUrl
from silx.io.dictdump import dicttoini, load as load_ini
from tomoscan.utils import docstring
import logging
_logger = logging.getLogger(__name__)
class VolumeSingleFrameBase(VolumeBase):
"""
Base class for Volume where each slice is saved in a separate file like edf, jp2k or tiff.
:param int start_index: users can provide a shift on the file name index when saving the files. This is useful if you want to
create a volume from several writers.
"""
DEFAULT_DATA_SCHEME = None
DEFAULT_DATA_PATH_PATTERN = "{volume_basename}_{index_zfill6}.{data_extension}"
DEFAULT_METADATA_EXTENSION = "txt"
# information regarding metadata
DEFAULT_METADATA_SCHEME = "ini"
DEFAULT_METADATA_PATH_PATTERN = "{volume_basename}_infos.{metadata_extension}"
def __init__(
self,
url: Optional[DataUrl] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
start_index: int = 0,
volume_basename: Optional[str] = None,
data_extension=None,
metadata_extension="txt",
) -> None:
self._volume_basename = volume_basename
super().__init__(
url,
data,
source_scan,
metadata,
data_url,
metadata_url,
overwrite,
data_extension,
metadata_extension,
)
self._start_index = start_index
@property
def start_index(self) -> int:
return self._start_index
def get_volume_basename(self, url=None):
if self._volume_basename is not None:
return self._volume_basename
else:
url = url or self.data_url
return os.path.basename(url.file_path())
@docstring(VolumeBase)
def deduce_data_and_metadata_urls(self, url: Optional[DataUrl]) -> tuple:
"""
Deduce automatically data and metadata url.
Default data will be saved as single frame edf.
Default metadata will be saved as a text file
"""
if url is None:
return None, None
else:
metadata_keywords = {
"volume_basename": self.get_volume_basename(url),
"metadata_extension": self.metadata_extension,
}
metadata_data_path = self.DEFAULT_METADATA_PATH_PATTERN.format(
**metadata_keywords
)
return (
# data url
DataUrl(
file_path=url.file_path(),
data_path=self.DEFAULT_DATA_PATH_PATTERN,
scheme=url.scheme() or self.DEFAULT_DATA_SCHEME,
data_slice=url.data_slice(),
),
# metadata url
DataUrl(
file_path=url.file_path(),
data_path=metadata_data_path,
scheme=url.scheme() or self.DEFAULT_METADATA_SCHEME,
),
)
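# Hypothetical illustration (folder name made up): for a tiff volume pointing to
# the folder "/tmp/vol" with volume basename "vol", the deduction above gives:
#   - data url: file_path="/tmp/vol", data_path="{volume_basename}_{index_zfill6}.{data_extension}"
#   - metadata url: file_path="/tmp/vol", data_path="vol_infos.txt", scheme "ini" when the input url has no scheme
# The data_path pattern is only resolved later by format_data_path_for_data.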
@docstring(VolumeBase)
def load_metadata(self, url: Optional[DataUrl] = None, store: bool = True) -> dict:
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
if url.scheme() == "ini":
metadata_file = url.file_path()
if url.data_path() is not None:
metadata_file = os.path.join(metadata_file, url.data_path())
_logger.info(f"load data to {metadata_file}")
try:
metadata = load_ini(metadata_file, "ini")
except FileNotFoundError:
_logger.warning(
f"unable to load metadata from {metadata_file} - File not found"
)
metadata = {}
except Exception as e:
_logger.error(
f"Failed to load metadata from {metadata_file}. Error is {e}"
)
metadata = {}
else:
raise ValueError(f"scheme {url.scheme()} is not handled")
if store:
self.metadata = metadata
return metadata
@docstring(VolumeBase)
def save_metadata(self, url: Optional[DataUrl] = None) -> None:
if self.metadata is None:
raise ValueError("No data to be saved")
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
else:
if url.scheme() == "ini":
metadata_file = url.file_path()
if url.data_path() is not None:
metadata_file = os.path.join(metadata_file, url.data_path())
_logger.info(f"save data to {metadata_file}")
if len(self.metadata) > 0:
dicttoini(self.metadata, metadata_file)
else:
raise ValueError(f"scheme {url.scheme()} is not handled")
# utils to format file path
def format_data_path_for_data(
self, data_path: str, index: int, volume_basename: str
) -> str:
"""
Return file path to save the frame at `index` of the current volume
"""
keywords = {
"index_zfill4": str(index + self.start_index).zfill(4),
"index_zfill6": str(index + self.start_index).zfill(6),
"volume_basename": volume_basename,
"data_extension": self.data_extension,
}
return data_path.format(**keywords)
def get_data_path_pattern_for_data(
self, data_path: str, volume_basename: str
) -> str:
"""
Return file path **pattern** (and not full path) to load data.
For example for edf it can return a pattern like 'myacquisition_[0-9]{3,4}.edf', used as a regular expression by browse_data_files.
"""
keywords = {
"index_zfill4": "[0-9]{3,4}",
"index_zfill6": "[0-9]{3,6}",
"volume_basename": volume_basename,
"data_extension": self.data_extension,
}
return data_path.format(**keywords)
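# Hypothetical illustration (values made up): with the default pattern
# "{volume_basename}_{index_zfill6}.{data_extension}" and a tiff volume named
# "vol" (start_index == 0):
#   format_data_path_for_data(pattern, index=12, volume_basename="vol")
#       -> "vol_000012.tiff"
#   get_data_path_pattern_for_data(pattern, volume_basename="vol")
#       -> "vol_[0-9]{3,6}.tiff"   (regular expression used by browse_data_files)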
@docstring(VolumeBase)
def save_data(self, url: Optional[DataUrl] = None) -> None:
if self.data is None:
raise ValueError("No data to be saved")
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
else:
_logger.info(f"save data to {url.path()}")
# if necessary create the output directory (some third-party writers do not do it for us)
try:
os.makedirs(url.file_path(), exist_ok=True)
except FileNotFoundError:
# can raise FileNotFoundError if file path is '.' for example
pass
assert self.data.ndim == 3
for frame, frame_dumper in zip(
self.data,
self.data_file_saver_generator(
n_frames=self.data.shape[0], data_url=url, overwrite=self.overwrite
),
):
frame_dumper[:] = frame
def data_file_name_generator(self, n_frames, data_url):
"""
browse output files for n_frames
"""
for i_frame in range(n_frames):
file_name = self.format_data_path_for_data(
data_url.data_path(),
index=i_frame,
volume_basename=self.get_volume_basename(data_url),
)
file_name = os.path.join(data_url.file_path(), file_name)
yield file_name
@docstring(VolumeBase)
def data_file_saver_generator(self, n_frames, data_url: DataUrl, overwrite: bool):
class _FrameDumper:
def __init__(self, url_scheme, file_name, callback) -> None:
self.url_scheme = url_scheme
self.file_name = file_name
self.overwrite = overwrite
self.__callback = callback
def __setitem__(self, key, value):
if not self.overwrite and os.path.exists(self.file_name):
raise OSError(
f"{self.file_name} already exists. If you want you can ask for the volume to overwriting existing files."
)
if key != slice(None, None, None):
raise ValueError("item setting only handle ':' for now")
self.__callback(
frame=value, file_name=self.file_name, scheme=self.url_scheme
)
os.makedirs(data_url.file_path(), exist_ok=True)
for file_name in self.data_file_name_generator(
n_frames=n_frames, data_url=data_url
):
yield _FrameDumper(
file_name=file_name,
url_scheme=data_url.scheme(),
callback=self.save_frame,
)
def get_volume_shape(self, url=None):
if self.data is not None:
return self.data.shape
else:
first_slice = next(self.browse_slices(url=url))
n_slices = len(tuple(self.browse_data_urls()))
return n_slices, first_slice.shape[0], first_slice.shape[1]
@docstring(VolumeBase)
def load_data(
self, url: Optional[DataUrl] = None, store: bool = True
) -> numpy.ndarray:
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
data = list(self.browse_slices(url=url))
if data == []:
data = None
_logger.warning(
f"Failed to load any data for {self.get_identifier().short_description}"
)
else:
data = numpy.asarray(data)
if data.ndim != 3:
raise ValueError(f"data is expected to be 3D not {data.ndim}.")
if store:
self.data = data
return data
def save_frame(self, frame: numpy.ndarray, file_name: str, scheme: str):
"""
Function dedicated to volumes saving each frame in a separate file
:param numpy.ndarray frame: frame to be saved
:param str file_name: path to store the data
:param str scheme: scheme to save the data
"""
raise NotImplementedError("Base class")
def load_frame(self, file_name: str, scheme: str) -> numpy.ndarray:
"""
Function dedicated to volumes loading each frame from a separate file
:param str file_name: path to load the data from
:param str scheme: scheme used to load the data
"""
raise NotImplementedError("Base class")
@docstring(VolumeBase)
def browse_metadata_files(self, url=None):
url = url or self.metadata_url
if url is None:
return
elif url.file_path() is not None:
if url.scheme() == "ini":
metadata_file = url.file_path()
if url.data_path() is not None:
metadata_file = os.path.join(metadata_file, url.data_path())
if os.path.exists(metadata_file):
yield metadata_file
else:
raise ValueError(f"scheme {url.scheme()} is not handled")
@docstring(VolumeBase)
def browse_data_files(self, url=None):
url = url or self.data_url
if url is None:
return
research_pattern = self.get_data_path_pattern_for_data(
url.data_path(), volume_basename=self.get_volume_basename(url)
)
try:
research_pattern = re.compile(research_pattern)
except Exception:
_logger.error(
f"Fail to compute regular expresion for {research_pattern}. Unable to load data"
)
return None
# use case of a single file
if not os.path.exists(url.file_path()):
return
elif os.path.isfile(url.file_path()):
yield url.file_path()
else:
for file_ in sorted(os.listdir(url.file_path())):
if research_pattern.match(file_):
full_file_path = os.path.join(url.file_path(), file_)
yield full_file_path
@docstring(VolumeBase)
def browse_data_urls(self, url=None):
url = url or self.data_url
for data_file in self.browse_data_files(url=url):
yield DataUrl(
file_path=data_file,
scheme=url.scheme(),
)
@docstring(VolumeBase)
def browse_slices(self, url=None):
if url is None and self.data is not None:
for slice in self.data:
yield slice
else:
url = url or self.data_url
if url is None:
raise ValueError(
"No data and data_url know and no url provided. Uanble to browse slices"
)
for file_path in self.browse_data_files(url=url):
yield self.load_frame(file_name=file_path, scheme=url.scheme())
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/volume/tiffvolume.py 0000644 0236253 0006511 00000042061 00000000000 022321 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module defining utils for a tiff volume"""
__authors__ = ["H. Payno", "P. Paleo"]
__license__ = "MIT"
__date__ = "01/02/2022"
from typing import Optional
import numpy
from tomoscan.esrf.identifier.tiffidentifier import (
MultiTiffVolumeIdentifier,
TIFFVolumeIdentifier,
)
from tomoscan.scanbase import TomoScanBase
from tomoscan.esrf.volume.singleframebase import VolumeSingleFrameBase
from tomoscan.utils import docstring, get_subvolume_shape
from silx.io.url import DataUrl
from silx.io.dictdump import dicttoini, load as load_ini
import os
from tomoscan.volumebase import VolumeBase
try:
import tifffile # noqa #F401 needed for later possible lazy loading
except ImportError:
has_tifffile = False
else:
has_tifffile = True
from tifffile import TiffWriter
from tifffile import TiffFile
import logging
_logger = logging.getLogger(__name__)
def check_has_tiffle_file(handle_mode: str):
assert handle_mode in ("warning", "raises")
if not has_tifffile:
message = "Unable to import `tifffile`. Unable to load or save tiff file. You can use pip to install it"
if handle_mode == "message":
_logger.warning(message)
elif handle_mode == "raises":
raise ValueError(message)
class TIFFVolume(VolumeSingleFrameBase):
"""
Save volume data to single frame tiff and metadata to .txt files
:warning: each file saved under {volume_basename}_{index_zfill6}.tiff is considered to be a slice of the volume.
"""
DEFAULT_DATA_EXTENSION = "tiff"
DEFAULT_DATA_SCHEME = "tifffile"
def __init__(
self,
folder: Optional[str] = None,
volume_basename: Optional[str] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
start_index=0,
data_extension=DEFAULT_DATA_EXTENSION,
metadata_extension=VolumeSingleFrameBase.DEFAULT_METADATA_EXTENSION,
) -> None:
if folder is not None:
url = DataUrl(
file_path=str(folder),
data_path=None,
)
else:
url = None
super().__init__(
url=url,
volume_basename=volume_basename,
data=data,
source_scan=source_scan,
metadata=metadata,
data_url=data_url,
metadata_url=metadata_url,
overwrite=overwrite,
start_index=start_index,
data_extension=data_extension,
metadata_extension=metadata_extension,
)
check_has_tiffle_file("warning")
@docstring(VolumeSingleFrameBase)
def save_frame(self, frame, file_name, scheme):
check_has_tiffle_file("raises")
if scheme == "tifffile":
tiff_writer = TiffWriter(file_name)
tiff_writer.write(frame)
else:
raise ValueError(f"scheme {scheme} is not handled")
@docstring(VolumeSingleFrameBase)
def load_frame(self, file_name, scheme) -> numpy.ndarray:
check_has_tiffle_file("raises")
if scheme == "tifffile":
return tifffile.imread(file_name)
else:
raise ValueError(f"scheme {scheme} is not handled")
# identifier section
@staticmethod
@docstring(VolumeSingleFrameBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, TIFFVolumeIdentifier):
raise TypeError(
f"identifier should be an instance of {TIFFVolumeIdentifier} not {type(identifier)}"
)
return TIFFVolume(
folder=identifier.folder,
volume_basename=identifier.file_prefix,
)
@docstring(VolumeSingleFrameBase)
def get_identifier(self) -> TIFFVolumeIdentifier:
if self.url is None:
raise ValueError("no file_path provided. Cannot provide an identifier")
return TIFFVolumeIdentifier(
object=self, folder=self.url.file_path(), file_prefix=self._volume_basename
)
@staticmethod
def example_defined_from_str_identifier() -> str:
return " ; ".join(
[
f"{TIFFVolume(folder='/path/to/my/my_folder').get_identifier().to_str()}",
f"{TIFFVolume(folder='/path/to/my/my_folder', volume_basename='mybasename').get_identifier().to_str()} (if mybasename != folder name)",
]
)
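# Hypothetical usage sketch (folder and values made up, requires the optional
# tifffile dependency): writing a stack of single-frame tiff slices.
#
#   import numpy
#   from tomoscan.esrf.volume.tiffvolume import TIFFVolume
#   vol = TIFFVolume(folder="/tmp/my_tiff_vol", volume_basename="recon")
#   vol.data = numpy.random.random((3, 64, 64)).astype(numpy.float32)
#   vol.metadata = {"voxel_size": 1.0}  # free-form dict, saved as ini
#   vol.save_data()       # one recon_XXXXXX.tiff file per slice
#   vol.save_metadata()   # recon_infos.txt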
class MultiTIFFVolume(VolumeBase):
"""
Save tiff into a single tiff file
:param str file_path: path to the multiframe tiff file
"""
def __init__(
self,
file_path: Optional[str] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
append: bool = False,
) -> None:
if file_path is not None:
url = DataUrl(file_path=file_path)
else:
url = None
super().__init__(
url, data, source_scan, metadata, data_url, metadata_url, overwrite
)
check_has_tiffle_file("warning")
self.append = append
@docstring(VolumeBase)
def deduce_data_and_metadata_urls(self, url: Optional[DataUrl]) -> tuple:
# convention for tiff multiframe:
# expect the url to provide a path to the tiff multiframe file, so data_url will be the same as url
# and the metadata_url will target a {prefix}_infos.txt file where prefix is the tiff file prefix
if url is None:
return None, None
else:
if url.data_slice() is not None:
raise ValueError(f"data_slice is not handled by the {MultiTIFFVolume}")
file_path = url.file_path()
if url.data_path() is not None:
raise ValueError("data_path is not handled")
scheme = url.scheme() or "tifffile"
metadata_file = "_".join([os.path.splitext(file_path)[0], "infos.txt"])
return (
# data url
DataUrl(
file_path=url.file_path(),
scheme=scheme,
),
# metadata url
DataUrl(
file_path=metadata_file,
scheme="ini",
),
)
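# Hypothetical illustration (file name made up): for an url pointing to
# "/tmp/stack.tif", the deduced data url targets "/tmp/stack.tif" itself
# (scheme "tifffile" by default) and the metadata url targets
# "/tmp/stack_infos.txt" (scheme "ini").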
@docstring(VolumeBase)
def save_data(self, url: Optional[DataUrl] = None) -> None:
"""
:raises KeyError: if data path already exists and overwrite set to False
:raises ValueError: if data is None
"""
# to be discussed. Not sure we should raise an error in this case. Could be useful but this could also be a double-edged sword
if self.data is None:
raise ValueError("No data to be saved")
check_has_tiffle_file("raises")
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
if url.scheme() == "tifffile":
if url.data_path() is not None:
raise ValueError("No data path expected. Unagleto save data")
else:
_logger.info(f"save data to {url.path()}")
with TiffWriter(url.file_path(), bigtiff=True, append=self.append) as tif:
if self.data.ndim == 2:
tif.write(self.data)
elif self.data.ndim == 3:
for slice in self.data:
tif.write(slice)
else:
raise ValueError(f"data should be 3D and not {self.data.ndim}D")
else:
raise ValueError(f"Scheme {url.scheme()} is not handled")
@docstring(VolumeBase)
def data_file_saver_generator(self, n_frames, data_url: DataUrl, overwrite: bool):
"""
warning: the file will stay open until the generator is exhausted
"""
class _FrameDumper:
"""
will not work for VirtualLayout
"""
def __init__(self, url, append) -> None:
self.url = url
self.append = append
def __setitem__(self, key, value):
if self.url.scheme() == "tifffile":
if self.url.data_path() is not None:
raise ValueError("No data path expected. Unagleto save data")
else:
_logger.info(f"save data to {self.url.path()}")
if key != slice(None, None, None):
raise ValueError("item setting only handle ':' for now")
with TiffWriter(
self.url.file_path(), bigtiff=True, append=self.append
) as tif:
tif.write(value)
else:
raise ValueError(f"Scheme {self.url.scheme()} is not handled")
for i_frame in range(n_frames):
yield _FrameDumper(data_url, append=self.append if i_frame == 0 else True)
@docstring(VolumeBase)
def save_metadata(self, url: Optional[DataUrl] = None) -> None:
"""
:raises KeyError: if data path already exists and overwrite set to False
:raises ValueError: if metadata is None
"""
if self.metadata is None:
raise ValueError("No metadata to be saved")
check_has_tiffle_file("raises")
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
_logger.info(f"save metadata to {url.path()}")
if url.scheme() == "ini":
if url.data_path() is not None:
raise ValueError("data_path is not handled by 'ini' scheme")
else:
dicttoini(
self.metadata,
url.file_path(),
)
else:
raise ValueError(f"Scheme {url.scheme()} is not handled by multiframe tiff")
@docstring(VolumeBase)
def load_data(
self, url: Optional[DataUrl] = None, store: bool = True
) -> numpy.ndarray:
url = url or self.data_url
if url is None:
raise ValueError(
"Cannot get data_url. An url should be provided. Don't know where to save this."
)
data = numpy.asarray([slice for slice in self.browse_slices(url=url)])
if store:
self.data = data
return data
@docstring(VolumeBase)
def load_metadata(self, url: Optional[DataUrl] = None, store: bool = True) -> dict:
url = url or self.metadata_url
if url is None:
raise ValueError(
"Cannot get metadata_url. An url should be provided. Don't know where to save this."
)
if url.scheme() == "ini":
metadata_file = url.file_path()
if url.data_path() is not None:
raise ValueError("data_path is not handled by ini scheme")
else:
try:
metadata = load_ini(metadata_file, "ini")
except FileNotFoundError:
_logger.warning(f"unable to load metadata from {metadata_file}")
metadata = {}
else:
raise ValueError(f"Scheme {url.scheme()} is not handled by multiframe tiff")
if store:
self.metadata = metadata
return metadata
@staticmethod
@docstring(VolumeBase)
def from_identifier(identifier):
"""Return the Dataset from a identifier"""
if not isinstance(identifier, MultiTiffVolumeIdentifier):
raise TypeError(
f"identifier should be an instance of {MultiTiffVolumeIdentifier}"
)
return MultiTIFFVolume(
file_path=identifier.file_path,
)
@docstring(VolumeBase)
def get_identifier(self) -> MultiTiffVolumeIdentifier:
if self.url is None:
raise ValueError("no file_path provided. Cannot provide an identifier")
return MultiTiffVolumeIdentifier(object=self, tiff_file=self.url.file_path())
def browse_metadata_files(self, url=None):
"""
return a generator going through all the existing files associated with the data volume
"""
url = url or self.metadata_url
if url is None:
return
elif url.file_path() is not None and os.path.exists(url.file_path()):
yield url.file_path()
def browse_data_files(self, url=None):
"""
return a generator going through all the existing files associated with the data volume
"""
url = url or self.data_url
if url is None:
return
elif url.file_path() is not None and os.path.exists(url.file_path()):
yield url.file_path()
def browse_data_urls(self, url=None):
url = url or self.data_url
for data_file in self.browse_data_files(url=url):
yield DataUrl(
file_path=data_file,
scheme=url.scheme(),
)
@docstring(VolumeBase)
def browse_slices(self, url=None):
if url is None and self.data is not None:
for slice in self.data:
yield slice
else:
url = url or self.data_url
if url is None:
raise ValueError(
"No data and data_url know and no url provided. Uanble to browse slices"
)
if url.scheme() == "tifffile":
if url.data_path() is not None:
raise ValueError("data_path is not handle by multiframe tiff")
url = url or self.data_url
reader = TiffFile(url.file_path())
for serie in reader.series:
data = serie.asarray()
if data.ndim == 3:
for slice in data:
yield slice
elif data.ndim == 2:
yield data
else:
raise ValueError("serie is expected to be 2D or 3D")
else:
raise ValueError(
f"Scheme {url.scheme()} is not handled by multiframe tiff"
)
def get_volume_shape(self, url=None):
if self.data is not None:
return self.data.shape
url = url or self.data_url
with tifffile.TiffFile(url.file_path()) as t:
shapes = [serie.shape for serie in t.series]
# assume that all series have the same dimensions for axis 1 and 2
vol_shape = (len(t.series), shapes[0][0], shapes[0][1])
return vol_shape
def _get_tiff_volume_dtype(self):
with tifffile.TiffFile(self.url.file_path()) as t:
dtype = t.series[0].dtype
# assume that dtype is the same for all series
return dtype
@docstring(VolumeBase)
def load_chunk(self, chunk, url=None):
vol_shape = self.get_volume_shape()
vol_dtype = self._get_tiff_volume_dtype()
chunk_shape = get_subvolume_shape(chunk, vol_shape)
data_chunk = numpy.zeros(chunk_shape, dtype=vol_dtype)
start_z = chunk[0].start or 0
for i, image in enumerate(self.browse_slices(url=url)):
if i >= start_z and i - start_z < chunk_shape[0]:
data_chunk[i - start_z, ...] = image[chunk[1:]]
return data_chunk
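# Hypothetical usage sketch (file path made up): reading a sub-volume of a
# multiframe tiff, keeping only the frames and columns of interest in memory.
#
#   from tomoscan.esrf.volume.tiffvolume import MultiTIFFVolume
#   vol = MultiTIFFVolume(file_path="/tmp/stack.tif")
#   chunk = vol.load_chunk((slice(10, 20), slice(None), slice(0, 256)))
#   # chunk.shape == (10, n_rows, 256) for a large enough volume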
@staticmethod
def example_defined_from_str_identifier() -> str:
return (
MultiTIFFVolume(file_path="/path/to/tiff_file.tif")
.get_identifier()
.to_str()
)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/esrf/volume/utils.py 0000644 0236253 0006511 00000020745 00000000000 021306 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""utils function for esrf volumes"""
__authors__ = [
"H. Payno",
]
__license__ = "MIT"
__date__ = "11/07/2022"
import os
import h5py
from tomoscan.esrf.volume.edfvolume import EDFVolume
from tomoscan.esrf.volume.hdf5volume import HDF5Volume
from tomoscan.esrf.volume.tiffvolume import MultiTIFFVolume, TIFFVolume
from tomoscan.esrf.volume.jp2kvolume import JP2KVolume
from tomoscan.esrf.volume.rawvolume import RawVolume
from tomoscan.esrf.identifier.edfidentifier import EDFVolumeIdentifier
from tomoscan.esrf.identifier.hdf5Identifier import HDF5VolumeIdentifier
from tomoscan.esrf.identifier.tiffidentifier import (
TIFFVolumeIdentifier,
MultiTiffVolumeIdentifier,
)
from tomoscan.esrf.identifier.jp2kidentifier import JP2KVolumeIdentifier
from tomoscan.esrf.identifier.rawidentifier import RawVolumeIdentifier
from tomoscan.io import HDF5File
from typing import Optional
import logging
_logger = logging.getLogger(__name__)
_DEFAULT_SCHEME_TO_VOL = {
EDFVolumeIdentifier.scheme: EDFVolume,
HDF5VolumeIdentifier.scheme: HDF5Volume,
TIFFVolumeIdentifier.scheme: TIFFVolume,
MultiTiffVolumeIdentifier.scheme: MultiTIFFVolume,
JP2KVolumeIdentifier.scheme: JP2KVolume,
RawVolumeIdentifier.scheme: RawVolume,
}
def guess_hdf5_volume_data_paths(file_path, data_path="/", depth=3) -> tuple:
"""
browse the hdf5 file 'file_path' from 'data_path', down to 'depth' levels, and check for possibly defined volumes.
:param str file_path: file path to the hdf5 file to browse
:param str data_path: path in the file from which to start the search
:param int depth: maximum number of levels below 'data_path' to search
:return: tuple of data_path that could fit a volume
:rtype: tuple
"""
if not h5py.is_hdf5(file_path):
raise ValueError(f"{file_path} is not a hdf5 file path")
with HDF5File(filename=file_path, mode="r") as h5f:
group = h5f[data_path]
if isinstance(group, h5py.Group):
if HDF5Volume.DATA_DATASET_NAME in group:
return (data_path,)
elif depth > 0:
res = []
for key in group.keys():
res.extend(
guess_hdf5_volume_data_paths(
file_path=file_path,
data_path="/".join((data_path, key)).replace("//", "/"),
depth=depth - 1,
)
)
return tuple(res)
return tuple()
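# Hypothetical illustration (file and paths made up): if "/entry0000/reconstruction"
# and "/entry0001/reconstruction" both contain the HDF5Volume.DATA_DATASET_NAME
# dataset, then guess_hdf5_volume_data_paths("/tmp/recon.hdf5") returns
# ("/entry0000/reconstruction", "/entry0001/reconstruction").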
def guess_volumes(path, scheme_to_vol: Optional[dict] = None) -> tuple:
"""
from a file path or a folder path try to guess volume(s)
:param str path: file or folder path
:param dict scheme_to_vol: dict to know which constructor to call. Key is the scheme, value is the volume constructor.
useful for libraries redefining volumes or adding some, like tomwer.
If none is provided, the tomoscan default one is used
:return: tuple of volume
:rtype: tuple
"""
if not os.path.exists(path):
raise OSError("path doesn't exists")
if scheme_to_vol is None:
scheme_to_vol = _DEFAULT_SCHEME_TO_VOL
if os.path.isfile(path):
if h5py.is_hdf5(path):
res = []
for data_path in guess_hdf5_volume_data_paths(path):
assert isinstance(data_path, str)
res.append(
scheme_to_vol[HDF5VolumeIdentifier.scheme](
file_path=path,
data_path=data_path,
)
)
return tuple(res)
elif path.lower().endswith((".tif", ".tiff")):
return (scheme_to_vol[MultiTiffVolumeIdentifier.scheme](file_path=path),)
elif path.lower().endswith((".vol", ".raw")):
return (scheme_to_vol[RawVolumeIdentifier.scheme](file_path=path),)
elif os.path.isdir(path):
most_common_extension = get_most_common_extension(path)
if most_common_extension is None:
return tuple()
basename = _guess_volume_basename(path, extension=most_common_extension)
if most_common_extension in ("tiff", "tif"):
return (
scheme_to_vol[TIFFVolumeIdentifier.scheme](
folder=path,
volume_basename=basename,
data_extension=most_common_extension,
),
)
elif most_common_extension in ("jp2", "jp2k"):
return (
scheme_to_vol[JP2KVolumeIdentifier.scheme](
folder=path,
volume_basename=basename,
data_extension=most_common_extension,
),
)
elif most_common_extension == "edf":
return (
scheme_to_vol[EDFVolumeIdentifier.scheme](
folder=path,
volume_basename=basename,
data_extension=most_common_extension,
),
)
else:
_logger.warning(
f"most common extension is {most_common_extension}. Unable to create a volume from it"
)
return tuple()
else:
raise NotImplementedError("guess_volumes only handle file and folder...")
def get_most_common_extension(folder_path):
if not os.path.isdir(folder_path):
raise ValueError(f"a folder path is expected. {folder_path} isn't")
extensions = {}
for file_path in os.listdir(folder_path):
_, ext = os.path.splitext(file_path)
ext = ext.lower().lstrip(".")
if ext in extensions:
extensions[ext] += 1
else:
extensions[ext] = 1
# filter not handled extensions
def is_valid_extension(extension):
return extension in ("edf", "tif", "tiff", "jp2", "jp2k")
extensions = {
key: value for (key, value) in extensions.items() if is_valid_extension(key)
}
if len(extensions) == 0:
_logger.warning(f"no valid extensions found in {folder_path}")
else:
sort_extensions = sorted(extensions.items(), key=lambda x: x[1], reverse=True)
return sort_extensions[0][0]
def _guess_volume_basename(folder_path, extension):
# list all the files matching the extension and guess the file pattern
files_to_check = []
possible_basenames = {}
for file_path in os.listdir(folder_path):
if file_path.lower().endswith(extension):
files_to_check.append(os.path.splitext(file_path)[0])
# the expected way to save those files is basename_XXXX where XXXX is the index over 4 characters
basename = "_".join(file_path.split("_")[:-1])
if basename in possible_basenames:
possible_basenames[basename] += 1
else:
possible_basenames[basename] = 1
if len(possible_basenames) == 0:
_logger.warning(f"no valid basename found in {folder_path}")
else:
sort_basenames = sorted(
possible_basenames.items(), key=lambda x: x[1], reverse=True
)
if len(sort_basenames) > 1:
_logger.warning(
f"more than one basename found. Take the most probable one ({sort_basenames[0][0]})"
)
return sort_basenames[0][0]
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/factory.py 0000644 0236253 0006511 00000023425 00000000000 017345 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
# Copyright (C) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""Contains the Factory class and dedicated functions"""
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "27/02/2019"
from urllib.parse import urlparse
from tomoscan.esrf.identifier.jp2kidentifier import JP2KVolumeIdentifier
from tomoscan.esrf.identifier.tiffidentifier import (
MultiTiffVolumeIdentifier,
TIFFVolumeIdentifier,
)
from tomoscan.esrf.identifier.rawidentifier import RawVolumeIdentifier
from tomoscan.esrf.identifier.url_utils import split_path
from tomoscan.esrf.volume.edfvolume import EDFVolume
from tomoscan.esrf.volume.hdf5volume import HDF5Volume
from tomoscan.esrf.volume.jp2kvolume import JP2KVolume
from tomoscan.esrf.volume.tiffvolume import MultiTIFFVolume, TIFFVolume
from tomoscan.esrf.volume.rawvolume import RawVolume
from tomoscan.tomoobject import TomoObject
from .scanbase import TomoScanBase
from .esrf.scan.edfscan import EDFTomoScan
from .esrf.scan.hdf5scan import HDF5TomoScan
from .esrf.identifier.edfidentifier import EDFTomoScanIdentifier, EDFVolumeIdentifier
from .esrf.identifier.hdf5Identifier import HDF5TomoScanIdentifier, HDF5VolumeIdentifier
from tomoscan.identifier import BaseIdentifier, ScanIdentifier, VolumeIdentifier
from . import identifier as _identifier_mod
from typing import Union
import os
class Factory:
"""
Factory any TomoObject
"""
@staticmethod
def create_tomo_object_from_identifier(
identifier: Union[str, ScanIdentifier],
) -> TomoObject:
"""
Create an instance of TomoScanBase from his identifier if possible
:param str identifier: identifier of the TomoScanBase
:raises: TypeError if identifier is not a str
:raises: ValueError if identifier cannot be converted back to an instance of TomoScanBase
"""
if not isinstance(identifier, (str, BaseIdentifier)):
raise TypeError(
f"identifier is expected to be a str or an instance of {BaseIdentifier} not {type(identifier)}"
)
# step 1: convert identifier to an instance of BaseIdentifier if necessary
if isinstance(identifier, str):
info = urlparse(identifier)
paths = split_path(info.path)
scheme = info.scheme
if len(paths) == 1:
# ensure backward compatibility. Originally (until 0.8) there was only one type, which was scan
tomo_type = ScanIdentifier.TOMO_TYPE
elif len(paths) == 2:
tomo_type, _ = paths
else:
raise ValueError("Failed to parse path string:", info.path)
if tomo_type == _identifier_mod.VolumeIdentifier.TOMO_TYPE:
if scheme == "edf":
identifier = EDFVolumeIdentifier.from_str(identifier=identifier)
elif scheme == "hdf5":
identifier = HDF5VolumeIdentifier.from_str(identifier=identifier)
elif scheme == "tiff":
identifier = TIFFVolumeIdentifier.from_str(identifier=identifier)
elif scheme == "tiff3d":
identifier = MultiTiffVolumeIdentifier.from_str(
identifier=identifier
)
elif scheme == "jp2k":
identifier = JP2KVolumeIdentifier.from_str(identifier=identifier)
elif scheme == "raw":
identifier = RawVolumeIdentifier.from_str(identifier=identifier)
else:
raise ValueError(f"Scheme {scheme} is not recognized")
elif tomo_type == _identifier_mod.ScanIdentifier.TOMO_TYPE:
# otherwise consider this is a scan. Ensure backward compatibility
if scheme == "edf":
identifier = EDFTomoScanIdentifier.from_str(identifier=identifier)
elif scheme == "hdf5":
identifier = HDF5TomoScanIdentifier.from_str(identifier=identifier)
else:
raise ValueError(f"Scheme {scheme} not recognized")
else:
raise ValueError(f"{tomo_type} is not an handled tomo type")
# step 2: convert identifier to a TomoBaseObject
assert isinstance(identifier, BaseIdentifier)
scheme = identifier.scheme
tomo_type = identifier.tomo_type
if scheme == "edf":
if tomo_type == VolumeIdentifier.TOMO_TYPE:
return EDFVolume.from_identifier(identifier=identifier)
elif tomo_type == ScanIdentifier.TOMO_TYPE:
return EDFTomoScan.from_identifier(identifier=identifier)
else:
raise NotImplementedError()
elif scheme == "hdf5":
if tomo_type == VolumeIdentifier.TOMO_TYPE:
return HDF5Volume.from_identifier(identifier=identifier)
elif tomo_type == ScanIdentifier.TOMO_TYPE:
return HDF5TomoScan.from_identifier(identifier=identifier)
else:
raise NotImplementedError()
elif scheme == "jp2k":
if tomo_type == VolumeIdentifier.TOMO_TYPE:
return JP2KVolume.from_identifier(identifier=identifier)
else:
raise NotImplementedError
elif scheme == "tiff":
if tomo_type == VolumeIdentifier.TOMO_TYPE:
return TIFFVolume.from_identifier(identifier=identifier)
else:
raise NotImplementedError
elif scheme == "tiff3d":
if tomo_type == VolumeIdentifier.TOMO_TYPE:
return MultiTIFFVolume.from_identifier(identifier=identifier)
else:
raise NotImplementedError
elif scheme == "raw":
if tomo_type == VolumeIdentifier.TOMO_TYPE:
return RawVolume.from_identifier(identifier=identifier)
else:
raise ValueError(f"Scheme {scheme} not recognized")
@staticmethod
def create_scan_object(scan_path: str) -> TomoScanBase:
"""
:param str scan_path: path to the scan directory or file
:return: ScanBase instance fitting the scan folder or scan path
:rtype: TomoScanBase
"""
# remove any final separator (otherwise basename might fail)
scan_path = scan_path.rstrip(os.path.sep)
if EDFTomoScan.is_tomoscan_dir(scan_path):
return EDFTomoScan(scan=scan_path)
elif HDF5TomoScan.is_tomoscan_dir(scan_path):
return HDF5TomoScan(scan=scan_path)
else:
raise ValueError("%s is not a valid scan path" % scan_path)
@staticmethod
def create_scan_objects(scan_path: str) -> tuple:
"""
:param str scan_path: path to the scan directory or file
:return: all possible instances of TomoScanBase contained in the given
path
:rtype: tuple
"""
scan_path = scan_path.rstrip(os.path.sep)
if EDFTomoScan.is_tomoscan_dir(scan_path):
return (EDFTomoScan(scan=scan_path),)
elif HDF5TomoScan.is_tomoscan_dir(scan_path):
scans = []
master_file = HDF5TomoScan.get_master_file(scan_path=scan_path)
entries = HDF5TomoScan.get_valid_entries(master_file)
for entry in entries:
scans.append(HDF5TomoScan(scan=scan_path, entry=entry, index=None))
return tuple(scans)
raise ValueError("%s is not a valid scan path" % scan_path)
@staticmethod
def create_scan_object_frm_dict(_dict: dict) -> TomoScanBase:
"""
Create a TomoScanBase instance from a dictionary. It should contain
at least the TomoScanBase._DICT_TYPE_KEY key.
:param _dict: dictionary to be converted
:return: instance of TomoScanBase
:rtype: TomoScanBase
"""
if TomoScanBase.DICT_TYPE_KEY not in _dict:
raise ValueError(
"given dict is not recognized. Cannot find" "",
TomoScanBase.DICT_TYPE_KEY,
)
elif _dict[TomoScanBase.DICT_TYPE_KEY] == EDFTomoScan._TYPE:
return EDFTomoScan(scan=None).load_from_dict(_dict)
else:
raise ValueError(
f"Scan type: {_dict[TomoScanBase.DICT_TYPE_KEY]} is not managed"
)
@staticmethod
def is_tomoscan_dir(scan_path: str) -> bool:
"""
:param str scan_path: path to the scan directory or file
:return: True if the given path is a root folder of an acquisition.
:rtype: bool
"""
return HDF5TomoScan.is_tomoscan_dir(scan_path) or EDFTomoScan.is_tomoscan_dir(
scan_path
)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/framereducerbase.py 0000644 0236253 0006511 00000006747 00000000000 021205 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module for giving information on process progress"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "07/08/2019"
from typing import Optional
from silx.utils.enum import Enum as _Enum
from tomoscan.scanbase import TomoScanBase
from numpy.core.numerictypes import generic as numpy_generic
import numpy
class ReduceMethod(_Enum):
MEAN = "mean" # compute the mean of dark / flat frames serie
MEDIAN = "median" # compute the median of dark / flat frames serie
FIRST = "first" # take the first frame of the dark / flat serie
LAST = "last" # take the last frame of the dark / flat serie
NONE = "none"
class REDUCER_TARGET(_Enum):
DARKS = "darks"
FLATS = "flats"
class FrameReducerBase:
def __init__(
self,
scan: TomoScanBase,
reduced_method: ReduceMethod,
target: REDUCER_TARGET,
output_dtype: Optional[numpy.dtype] = None,
overwrite=False,
):
self._reduced_method = ReduceMethod.from_value(reduced_method)
if not isinstance(scan, TomoScanBase):
raise TypeError(
f"{scan} is expected to be an instance of TomoscanBase not {type(scan)}"
)
self._scan = scan
self._reducer_target = REDUCER_TARGET.from_value(target)
if not isinstance(overwrite, bool):
raise TypeError(
f"overwrite is expected to be a boolean not {type(overwrite)}"
)
self._overwrite = overwrite
if output_dtype is not None and not issubclass(output_dtype, numpy_generic):
raise TypeError(
f"output_dtype is expected to be None or a numpy.dtype, not {type(output_dtype)}"
)
self._output_dtype = output_dtype
@property
def reduced_method(self) -> ReduceMethod:
return self._reduced_method
@property
def scan(self) -> TomoScanBase:
return self._scan
@property
def reducer_target(self) -> REDUCER_TARGET:
return self._reducer_target
@property
def overwrite(self):
return self._overwrite
@property
def output_dtype(self) -> Optional[numpy.dtype]:
return self._output_dtype
def run(self):
raise NotImplementedError
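# Minimal illustrative subclass (assumption, not part of tomoscan): how a concrete
# reducer could implement run() for the MEAN method. The real reducers shipped with
# tomoscan live in dedicated modules and handle many more cases.
#
#   class _MeanReducer(FrameReducerBase):
#       def run(self):
#           frames = ...  # gather the raw dark or flat frames from self.scan (omitted)
#           reduced = numpy.mean(frames, axis=0)
#           if self.output_dtype is not None:
#               reduced = reduced.astype(self.output_dtype)
#           return reduced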
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1675245838.0
tomoscan-1.2.2/tomoscan/identifier.py 0000644 0236253 0006511 00000004602 00000000000 020014 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "10/01/2022"
class BaseIdentifier:
TOMO_TYPE = None
def __init__(self, object):
self._dataset_builder = object.from_identifier
@property
def tomo_type(self):
return self.TOMO_TYPE
def recreate_object(self):
"""Recreate the dataset from the identifier"""
return self._dataset_builder(self)
def short_description(self) -> str:
"""short description of the identifier"""
return ""
@property
def scheme(self) -> str:
raise NotImplementedError("Base class")
def to_str(self):
return str(self)
@staticmethod
def from_str(identifier):
raise NotImplementedError("base class")
def __eq__(self, __o: object) -> bool:
if isinstance(__o, BaseIdentifier):
return __o.to_str() == self.to_str()
elif isinstance(__o, str):
return __o == self.to_str()
else:
return False
class ScanIdentifier(BaseIdentifier):
TOMO_TYPE = "scan"
class VolumeIdentifier(BaseIdentifier):
TOMO_TYPE = "volume"
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/io.py 0000644 0236253 0006511 00000013616 00000000000 016306 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["W. de Nolf"]
__license__ = "MIT"
__date__ = "25/08/2020"
from contextlib import contextmanager
import logging
import os
import traceback
import errno
import h5py
from tomoscan.utils import SharedLockPool
HASSWMR = h5py.version.hdf5_version_tuple >= h5py.get_config().swmr_min_hdf5_version
_logger = logging.getLogger(__name__)
class HDF5File(h5py.File):
"""File to secure reading and writing within h5py
code originally from bliss.nexus_writer_service.io.nexus
"""
_LOCKPOOL = SharedLockPool()
def __init__(self, filename, mode, enable_file_locking=None, swmr=None, **kwargs):
"""
:param str filename:
:param str mode:
:param bool enable_file_locking: by default it is disabled for `mode=='r'`
and enabled in all other modes
:param bool swmr: when not specified: try both modes when `mode=='r'`
:param **kwargs: see `h5py.File.__init__`
"""
if mode not in ("r", "w", "w-", "x", "a"):
raise ValueError("invalid mode {}".format(mode))
with self._protect_init(filename):
# https://support.hdfgroup.org/HDF5/docNewFeatures/SWMR/Design-HDF5-FileLocking.pdf
if not HASSWMR and swmr:
swmr = False
libver = kwargs.get("libver")
if swmr:
kwargs["libver"] = "latest"
if enable_file_locking is None:
enable_file_locking = mode != "r"
old_file_locking = os.environ.get("HDF5_USE_FILE_LOCKING", None)
if enable_file_locking:
os.environ["HDF5_USE_FILE_LOCKING"] = "TRUE"
else:
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"
kwargs["track_order"] = True
try:
super().__init__(filename, mode=mode, swmr=swmr, **kwargs)
if mode != "r" and swmr:
# Try setting writing in SWMR mode
try:
self.swmr_mode = True
except Exception:
pass
except OSError as e:
if (
swmr is not None
or mode != "r"
or not HASSWMR
or not isErrno(e, errno.EAGAIN)
):
raise
# Try reading with opposite SWMR mode
swmr = not swmr
if swmr:
kwargs["libver"] = "latest"
else:
kwargs["libver"] = libver
super().__init__(filename, mode=mode, swmr=swmr, **kwargs)
if old_file_locking is None:
del os.environ["HDF5_USE_FILE_LOCKING"]
else:
os.environ["HDF5_USE_FILE_LOCKING"] = old_file_locking
@contextmanager
def _protect_init(self, filename):
"""Makes sure no other file is opened/created
or protected sections associated to the filename
are executed.
"""
lockname = os.path.abspath(filename)
with self._LOCKPOOL.acquire(None):
with self._LOCKPOOL.acquire(lockname):
yield
@contextmanager
def protect(self):
"""Protected section associated to this file."""
lockname = os.path.abspath(self.filename)
with self._LOCKPOOL.acquire(lockname):
yield
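# Hypothetical usage sketch (file and data path made up): HDF5File is used as a
# drop-in replacement for h5py.File, adding lock handling and optional SWMR support.
#
#   from tomoscan.io import HDF5File
#   with HDF5File("/tmp/scan.h5", mode="r") as h5f:
#       frames = h5f["/entry0000/instrument/detector/data"][()]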
def isErrno(e, errno):
"""
:param OSError e:
:returns bool:
"""
# Because e.__cause__ is None for chained exceptions
return "errno = {}".format(errno) in "".join(traceback.format_exc())
def check_virtual_sources_exist(fname, data_path):
"""
Check that a virtual dataset points to actual data.
:param str fname: HDF5 file path
:param str data_path: Path within the HDF5 file
:return bool res: Whether the virtual dataset points to actual data.
"""
with HDF5File(fname, "r") as f:
if data_path not in f:
_logger.error("No dataset %s in file %s" % (data_path, fname))
return False
dptr = f[data_path]
if not dptr.is_virtual:
return True
for vsource in dptr.virtual_sources():
vsource_fname = os.path.join(
os.path.dirname(dptr.file.filename), vsource.file_name
)
if not os.path.isfile(vsource_fname):
_logger.error("No such file: %s" % vsource_fname)
return False
elif not check_virtual_sources_exist(vsource_fname, vsource.dset_name):
_logger.error("Error with virtual source %s" % vsource_fname)
return False
return True
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1684328205.165433
tomoscan-1.2.2/tomoscan/nexus/ 0000755 0236253 0006511 00000000000 00000000000 016460 5 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1651578901.0
tomoscan-1.2.2/tomoscan/nexus/__init__.py 0000644 0236253 0006511 00000000000 00000000000 020557 0 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1684328205.165433
tomoscan-1.2.2/tomoscan/nexus/paths/ 0000755 0236253 0006511 00000000000 00000000000 017577 5 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1651578901.0
tomoscan-1.2.2/tomoscan/nexus/paths/__init__.py 0000644 0236253 0006511 00000000000 00000000000 021676 0 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1673359176.0
tomoscan-1.2.2/tomoscan/nexus/paths/nxdetector.py 0000644 0236253 0006511 00000001477 00000000000 022341 0 ustar 00payno soft 0000000 0000000 class NEXUS_DETECTOR_PATH:
DATA = "data"
IMAGE_KEY_CONTROL = "image_key_control"
IMAGE_KEY = "image_key"
X_PIXEL_SIZE = "x_pixel_size"
Y_PIXEL_SIZE = "y_pixel_size"
X_PIXEL_SIZE_MAGNIFIED = "x_magnified_pixel_size"
Y_PIXEL_SIZE_MAGNIFIED = "y_magnified_pixel_size"
X_REAL_PIXEL_SIZE = "real_x_pixel_size"
Y_REAL_PIXEL_SIZE = "real_y_pixel_size"
MAGNIFICATION = "magnification"
DISTANCE = "distance"
FOV = "field_of_view"
ESTIMATED_COR_FRM_MOTOR = "estimated_cor_from_motor"
EXPOSURE_TIME = "count_time"
X_FLIPPED = "x_flipped"
Y_FLIPPED = "y_flipped"
class NEXUS_DETECTOR_PATH_V_1_0(NEXUS_DETECTOR_PATH):
pass
class NEXUS_DETECTOR_PATH_V_1_1(NEXUS_DETECTOR_PATH):
pass
class NEXUS_DETECTOR_PATH_V_1_2(NEXUS_DETECTOR_PATH_V_1_1):
pass
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1673359176.0
tomoscan-1.2.2/tomoscan/nexus/paths/nxinstrument.py 0000644 0236253 0006511 00000000626 00000000000 022733 0 ustar 00payno soft 0000000 0000000 class NEXUS_INSTRUMENT_PATH:
DETECTOR_PATH = "detector"
DIODE = None
SOURCE = None
BEAM = None
NAME = None
class NEXUS_INSTRUMENT_PATH_V_1_0(NEXUS_INSTRUMENT_PATH):
pass
class NEXUS_INSTRUMENT_PATH_V_1_1(NEXUS_INSTRUMENT_PATH_V_1_0):
SOURCE = "source"
BEAM = "beam"
NAME = "name"
class NEXUS_INSTRUMENT_PATH_V_1_2(NEXUS_INSTRUMENT_PATH_V_1_1):
DIODE = "diode"
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1675245838.0
tomoscan-1.2.2/tomoscan/nexus/paths/nxmonitor.py 0000644 0236253 0006511 00000000372 00000000000 022210 0 ustar 00payno soft 0000000 0000000 class NEXUS_MONITOR_PATH:
DATA_PATH = "data"
class NEXUS_MONITOR_PATH_V_1_0(NEXUS_MONITOR_PATH):
pass
class NEXUS_MONITOR_PATH_V_1_1(NEXUS_MONITOR_PATH_V_1_0):
pass
class NEXUS_MONITOR_PATH_V_1_2(NEXUS_MONITOR_PATH_V_1_1):
pass
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1673359176.0
tomoscan-1.2.2/tomoscan/nexus/paths/nxsample.py 0000644 0236253 0006511 00000001034 00000000000 021776 0 ustar 00payno soft 0000000 0000000 class NEXUS_SAMPLE_PATH:
NAME = "sample_name"
ROTATION_ANGLE = "rotation_angle"
X_TRANSLATION = "x_translation"
Y_TRANSLATION = "y_translation"
Z_TRANSLATION = "z_translation"
ROCKING = "rocking"
BASE_TILT = "base_tilt"
N_STEPS_ROCKING = "n_step_rocking"
N_STEPS_ROTATION = "n_step_rotation"
class NEXUS_SAMPLE_PATH_V_1_0(NEXUS_SAMPLE_PATH):
pass
class NEXUS_SAMPLE_PATH_V_1_1(NEXUS_SAMPLE_PATH_V_1_0):
NAME = "name"
class NEXUS_SAMPLE_PATH_V_1_2(NEXUS_SAMPLE_PATH_V_1_1):
pass
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1673359176.0
tomoscan-1.2.2/tomoscan/nexus/paths/nxsource.py 0000644 0236253 0006511 00000000424 00000000000 022017 0 ustar 00payno soft 0000000 0000000 class NEXUS_SOURCE_PATH:
NAME = "name"
TYPE = "type"
PROBE = "probe"
class NEXUS_SOURCE_PATH_V_1_0(NEXUS_SOURCE_PATH):
pass
class NEXUS_SOURCE_PATH_V_1_1(NEXUS_SOURCE_PATH_V_1_0):
pass
class NEXUS_SOURCE_PATH_V_1_2(NEXUS_SOURCE_PATH_V_1_1):
pass
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/nexus/paths/nxtomo.py 0000644 0236253 0006511 00000025700 00000000000 021501 0 ustar 00payno soft 0000000 0000000 from typing import Optional
from . import nxdetector
from . import nxinstrument
from . import nxsample
from . import nxsource
from . import nxmonitor
from silx.utils.deprecation import deprecated
import logging
import tomoscan
_logger = logging.getLogger(__name__)
LATEST_VERSION = 1.2
class NXtomo_PATH:
    # lists all the paths that can be used by an nxtomo entry and read by tomoscan.
    # this is also used by nxtomomill to know where to save data
_NX_DETECTOR_PATHS = None
_NX_INSTRUMENT_PATHS = None
_NX_SAMPLE_PATHS = None
_NX_SOURCE_PATHS = None
_NX_CONTROL_PATHS = None
VERSION = None
@property
def nx_detector_paths(self):
return self._NX_DETECTOR_PATHS
@property
def nx_instrument_paths(self):
return self._NX_INSTRUMENT_PATHS
@property
def nx_sample_paths(self):
return self._NX_SAMPLE_PATHS
@property
def nx_source_paths(self):
return self._NX_SOURCE_PATHS
@property
def nx_monitor_paths(self):
return self._NX_CONTROL_PATHS
@property
def PROJ_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.DATA,
]
)
@property
def SCAN_META_PATH(self) -> str:
        # for now scan_meta and technique are not linked to any nxtomo...
return "scan_meta/technique/scan"
@property
def INSTRUMENT_PATH(self) -> str:
return "instrument"
@property
def CONTROL_PATH(self) -> str:
return "control"
@property
def DET_META_PATH(self) -> str:
return "scan_meta/technique/detector"
@property
def ROTATION_ANGLE_PATH(self):
return "/".join(["sample", self.nx_sample_paths.ROTATION_ANGLE])
@property
def SAMPLE_PATH(self) -> str:
return "sample"
@property
def NAME_PATH(self) -> str:
return "sample/name"
@property
def GRP_SIZE_ATTR(self) -> str:
return "group_size"
@property
def SAMPLE_NAME_PATH(self) -> str:
return "/".join([self.SAMPLE_PATH, self.nx_sample_paths.NAME])
@property
def X_TRANS_PATH(self) -> str:
return "/".join([self.SAMPLE_PATH, self.nx_sample_paths.X_TRANSLATION])
@property
def Y_TRANS_PATH(self) -> str:
return "/".join([self.SAMPLE_PATH, self.nx_sample_paths.Y_TRANSLATION])
@property
def Z_TRANS_PATH(self) -> str:
return "/".join([self.SAMPLE_PATH, self.nx_sample_paths.Z_TRANSLATION])
@property
def IMG_KEY_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.IMAGE_KEY,
]
)
@property
def IMG_KEY_CONTROL_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.IMAGE_KEY_CONTROL,
]
)
@property
def X_PIXEL_SIZE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.X_PIXEL_SIZE,
]
)
@property
def Y_PIXEL_SIZE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.Y_PIXEL_SIZE,
]
)
@property
def X_REAL_PIXEL_SIZE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.X_REAL_PIXEL_SIZE,
]
)
@property
def Y_REAL_PIXEL_SIZE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.Y_REAL_PIXEL_SIZE,
]
)
@property
@deprecated(replacement="X_PIXEL_SIZE_PATH", since_version="1.1.0")
def X_PIXEL_MAG_SIZE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.X_PIXEL_SIZE_MAGNIFIED,
]
)
@property
@deprecated(replacement="Y_PIXEL_SIZE_PATH", since_version="1.1.0")
def Y_PIXEL_MAG_SIZE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.Y_PIXEL_SIZE_MAGNIFIED,
]
)
@property
def DISTANCE_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.DISTANCE,
]
)
@property
def FOV_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.FOV,
]
)
@property
def EXPOSURE_TIME_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.EXPOSURE_TIME,
]
)
@property
def ELECTRIC_CURRENT_PATH(self) -> str:
return "/".join(
[
self.CONTROL_PATH,
self.nx_monitor_paths.DATA_PATH,
]
)
@property
def ESTIMATED_COR_FRM_MOTOR_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DETECTOR_PATH,
self.nx_detector_paths.ESTIMATED_COR_FRM_MOTOR,
]
)
@property
def TOMO_N_SCAN(self) -> str:
return "/".join(
[self.INSTRUMENT_PATH, self.nx_instrument_paths.DETECTOR_PATH, "tomo_n"]
)
@property
def BEAM_PATH(self) -> str:
return "beam"
@property
def ENERGY_PATH(self) -> str:
return f"{self.BEAM_PATH}/incident_energy"
@property
def START_TIME_PATH(self) -> str:
return "start_time"
@property
def END_TIME_PATH(self) -> str:
return "end_time"
@property
@deprecated(replacement="END_TIME_PATH", reason="typo", since_version="0.8.0")
def END_TIME_START(self) -> str:
return self.END_TIME_PATH
@property
def INTENSITY_MONITOR_PATH(self) -> str:
return "diode/data"
@property
@deprecated(
replacement="", reason="will be removed. Not used", since_version="0.8.0"
)
def EPSILON_ROT_ANGLE(self) -> float:
return 0.02
@property
def SOURCE_NAME(self) -> Optional[str]:
return None
@property
def SOURCE_TYPE(self) -> Optional[str]:
return None
@property
def SOURCE_PROBE(self) -> Optional[str]:
return None
@property
def INSTRUMENT_NAME(self) -> Optional[str]:
return None
@property
def ROCKING_PATH(self) -> str:
return "/".join([self.SAMPLE_PATH, self.nx_sample_paths.ROCKING])
@property
def BASE_TILT_PATH(self) -> str:
return "/".join([self.SAMPLE_PATH, self.nx_sample_paths.BASE_TILT])
class NXtomo_PATH_v_1_0(NXtomo_PATH):
VERSION = 1.0
_NX_DETECTOR_PATHS = nxdetector.NEXUS_DETECTOR_PATH_V_1_0
_NX_INSTRUMENT_PATHS = nxinstrument.NEXUS_INSTRUMENT_PATH_V_1_0
_NX_SAMPLE_PATHS = nxsample.NEXUS_SAMPLE_PATH_V_1_0
_NX_SOURCE_PATHS = nxsource.NEXUS_SOURCE_PATH_V_1_0
_NX_CONTROL_PATHS = nxmonitor.NEXUS_MONITOR_PATH_V_1_1
nx_tomo_path_v_1_0 = NXtomo_PATH_v_1_0()
class NXtomo_PATH_v_1_1(NXtomo_PATH_v_1_0):
VERSION = 1.1
_NX_DETECTOR_PATHS = nxdetector.NEXUS_DETECTOR_PATH_V_1_1
_NX_INSTRUMENT_PATHS = nxinstrument.NEXUS_INSTRUMENT_PATH_V_1_1
_NX_SAMPLE_PATHS = nxsample.NEXUS_SAMPLE_PATH_V_1_1
_NX_SOURCE_PATHS = nxsource.NEXUS_SOURCE_PATH_V_1_1
@property
def NAME_PATH(self) -> str:
return "title"
@property
def BEAM_PATH(self) -> str:
return "/".join([self.INSTRUMENT_PATH, self.nx_instrument_paths.BEAM])
@property
def SOURCE_NAME(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.SOURCE,
self.nx_source_paths.NAME,
]
)
@property
def SOURCE_TYPE(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.SOURCE,
self.nx_source_paths.TYPE,
]
)
@property
def SOURCE_PROBE(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.SOURCE,
self.nx_source_paths.PROBE,
]
)
@property
def INSTRUMENT_NAME(self) -> str:
return "/".join([self.INSTRUMENT_PATH, self.nx_instrument_paths.NAME])
nx_tomo_path_v_1_1 = NXtomo_PATH_v_1_1()
class NXtomo_PATH_v_1_2(NXtomo_PATH_v_1_1):
VERSION = 1.2
_NX_DETECTOR_PATHS = nxdetector.NEXUS_DETECTOR_PATH_V_1_2
_NX_INSTRUMENT_PATHS = nxinstrument.NEXUS_INSTRUMENT_PATH_V_1_2
_NX_SAMPLE_PATHS = nxsample.NEXUS_SAMPLE_PATH_V_1_2
_NX_SOURCE_PATHS = nxsource.NEXUS_SOURCE_PATH_V_1_2
@property
def INTENSITY_MONITOR_PATH(self) -> str:
return "/".join(
[
self.INSTRUMENT_PATH,
self.nx_instrument_paths.DIODE,
self.nx_detector_paths.DATA,
]
)
nx_tomo_path_v_1_2 = NXtomo_PATH_v_1_2()
nx_tomo_path_latest = nx_tomo_path_v_1_2
def get_paths(version: Optional[float]) -> NXtomo_PATH:
if version is None:
version = LATEST_VERSION
_logger.warning(
f"version of the NXtomo not found. Will take the latest one ({LATEST_VERSION})"
)
versions_dict = {
# Ensure compatibility with "old" datasets (acquired before Dec. 2021).
# Tomoscan can still parse them provided that nx_version=1.0 is forced at init.
0.0: nx_tomo_path_v_1_0,
0.1: nx_tomo_path_v_1_0,
#
1.0: nx_tomo_path_v_1_0,
1.1: nx_tomo_path_v_1_1,
1.2: nx_tomo_path_v_1_2,
}
if version not in versions_dict:
        if int(version) == 1:
            _logger.warning(
                f"nexus paths version {version} requested but unknown to this version of tomoscan ({tomoscan.__version__}). Falling back on the latest one of this major version. You might miss some information"
            )
            version = LATEST_VERSION
        else:
            raise ValueError(f"Unknown major version of the nexus paths ({version})")
return versions_dict[version]
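# --- Illustrative usage (editor's sketch, not part of the original module) ---
# Shows how the versioned path mappings resolve: the same logical field can
# live at different HDF5 locations depending on the NXtomo version.
def _example_versioned_paths():
    paths_v1_0 = get_paths(1.0)
    paths_v1_1 = get_paths(1.1)
    assert paths_v1_0.SAMPLE_NAME_PATH == "sample/sample_name"
    assert paths_v1_1.SAMPLE_NAME_PATH == "sample/name"
    # the projection data location is the same for every version so far
    assert paths_v1_1.PROJ_PATH == "instrument/detector/data"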
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/normalization.py 0000644 0236253 0006511 00000012512 00000000000 020557 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""
material for radio and sinogram normalization
"""
__authors__ = [
"H. Payno",
]
__license__ = "MIT"
__date__ = "25/06/2021"
from silx.utils.enum import Enum as _Enum
import typing
import numpy
import logging
_logger = logging.getLogger(__name__)
class Method(_Enum):
NONE = "none"
SUBTRACTION = "subtraction"
DIVISION = "division"
CHEBYSHEV = "chebyshev"
LSQR_SPLINE = "lsqr spline"
class _MethodMode(_Enum):
SCALAR = "scalar"
POLYNOMIAL_FIT = "polynomial fit"
class _ValueCalculationMode(_Enum):
MEAN = "mean"
MEDIAN = "median"
class _DatasetScope(_Enum):
LOCAL = "local"
GLOBAL = "global"
class _DatasetInfos:
def __init__(self):
self._scope = _DatasetScope.GLOBAL
self._file_path = None
self._data_path = None
@property
def scope(self) -> _DatasetScope:
return self._scope
@scope.setter
def scope(self, scope: typing.Union[str, _DatasetScope]):
self._scope = _DatasetScope.from_value(scope)
@property
def file_path(self):
return self._file_path
@file_path.setter
def file_path(self, file_path):
self._file_path = file_path
@property
def data_path(self):
return self._data_path
@data_path.setter
def data_path(self, data_path: str):
self._data_path = data_path
class _ROIInfo:
def __init__(self, x_min=None, x_max=None, y_min=None, y_max=None):
self.x_min = x_min
self.x_max = x_max
self.y_min = y_min
self.y_max = y_max
class IntensityNormalization:
"""Information regarding the intensity normalization to be done"""
def __init__(self):
self._method = Method.NONE
self._extra_info = {}
@property
def method(self):
return self._method
@method.setter
def method(self, method: typing.Union[str, Method, None]):
if method is None:
method = Method.NONE
self._method = Method.from_value(method)
def set_extra_infos(self, info: typing.Union[dict, _DatasetInfos, _ROIInfo]):
if info is None:
self._extra_info = None
elif not isinstance(info, (_DatasetInfos, _ROIInfo, dict)):
raise TypeError(
"info is expected to be an instance of _DatasetInfos or _ROIInfo"
)
else:
self._extra_info = info
def get_extra_infos(self) -> typing.Union[dict, _DatasetInfos, _ROIInfo]:
return self._extra_info
def to_dict(self) -> dict:
res = {
"method": self.method.value,
}
if self._extra_info not in (None, {}):
res["extra_infos"] = self.get_extra_infos()
return res
def load_from_dict(self, dict_):
if "method" in dict_:
self.method = dict_["method"]
if "extra_infos" in dict_:
self.set_extra_infos(dict_["extra_infos"])
return self
@staticmethod
def from_dict(dict_):
res = IntensityNormalization()
res.load_from_dict(dict_)
return res
def __str__(self):
return "method: {}, extra-infos: {}".format(self.method, self.get_extra_infos())
def normalize_chebyshev_2D(sino):
Nr, Nc = sino.shape
J = numpy.arange(Nc)
x = 2.0 * (J + 0.5 - Nc / 2) / Nc
sum0 = Nc
f2 = 3.0 * x * x - 1.0
sum1 = (x**2).sum()
sum2 = (f2**2).sum()
for i in range(Nr):
ff0 = sino[i, :].sum()
ff1 = (x * sino[i, :]).sum()
ff2 = (f2 * sino[i, :]).sum()
sino[i, :] = sino[i, :] - (ff0 / sum0 + ff1 * x / sum1 + ff2 * f2 / sum2)
return sino
def normalize_lsqr_spline_2D(sino):
try:
from scipy.interpolate import splev, splrep
except ImportError:
_logger.error("You should install scipy to do the lsqr spline " "normalization")
return None
Nr, Nc = sino.shape
for i in range(Nr):
line = sino[i, :]
spline = splrep(range(len(line)), sino[i, :], k=1)
correct = splev(range(len(line)), spline)
sino[i, :] = line - correct
return sino
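# --- Illustrative usage (editor's sketch, not part of the original module) ---
# normalize_chebyshev_2D subtracts, row by row, the projection of the signal
# on the first three polynomials of the basis; a purely linear ramp is
# therefore flattened to nearly zero. Note that the input is modified in place.
def _example_chebyshev_normalization():
    sino = numpy.outer(numpy.ones(4), numpy.linspace(0.0, 1.0, 64))
    normalize_chebyshev_2D(sino)
    assert numpy.abs(sino).max() < 1e-2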
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/progress.py 0000644 0236253 0006511 00000006263 00000000000 017543 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module for giving information on process progress"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "07/08/2019"
import sys
from enum import Enum
import logging
_logger = logging.getLogger(__name__)
class _Advancement(Enum):
step_1 = "\\"
step_2 = "-"
step_3 = "/"
step_4 = "|"
@staticmethod
def getNextStep(step):
if step is _Advancement.step_1:
return _Advancement.step_2
elif step is _Advancement.step_2:
return _Advancement.step_3
elif step is _Advancement.step_3:
return _Advancement.step_4
else:
return _Advancement.step_1
@staticmethod
def getStep(value):
if value % 4 == 0:
return _Advancement.step_4
elif value % 3 == 0:
return _Advancement.step_3
elif value % 2 == 0:
return _Advancement.step_2
else:
return _Advancement.step_1
class Progress(object):
"""Simple interface for defining advancement on a 100 percentage base"""
def __init__(self, name):
self._name = name
self.reset()
def reset(self, max_=None):
self._nProcessed = 0
self._maxProcessed = max_
def startProcess(self):
self.setAdvancement(0)
def setAdvancement(self, value):
length = 20 # modify this to change the length
block = int(round(length * value / 100))
msg = "\r{0}: [{1}] {2}%".format(
self._name, "#" * block + "-" * (length - block), round(value, 2)
)
if value >= 100:
msg += " DONE\r\n"
sys.stdout.write(msg)
sys.stdout.flush()
def endProcess(self):
self.setAdvancement(100)
def setMaxAdvancement(self, n):
self._maxProcessed = n
def increaseAdvancement(self, i=1):
self._nProcessed += i
self.setAdvancement((self._nProcessed / self._maxProcessed) * 100)
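# --- Illustrative usage (editor's sketch, not part of the original module) ---
# Typical Progress usage: declare how many items will be processed, then
# report each processed item; "DONE" is printed once 100% is reached.
def _example_progress():
    advancement = Progress(name="copy frames")
    advancement.setMaxAdvancement(5)
    for _ in range(5):
        advancement.increaseAdvancement(1)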
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/scanbase.py 0000644 0236253 0006511 00000201420 00000000000 017446 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
# Copyright (C) 2016- 2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""This modules contains base class for TomoScanBase"""
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "09/10/2019"
import fabio
import os
import typing
import logging
import h5py
import numpy
from typing import Union, Iterable, Optional
from collections import OrderedDict
import pathlib
from tomoscan.identifier import ScanIdentifier
from tomoscan.io import HDF5File
from tomoscan.unitsystem.electriccurrentsystem import ElectricCurrentSystem
from tomoscan.unitsystem.timesystem import TimeSystem
from .unitsystem.metricsystem import MetricSystem
from silx.utils.enum import Enum as _Enum
from silx.io.url import DataUrl
from silx.io.utils import get_data
import silx.io.utils
from math import ceil
from .progress import Progress
from bisect import bisect_left
from tomoscan.normalization import (
IntensityNormalization,
Method as _IntensityMethod,
normalize_chebyshev_2D,
normalize_lsqr_spline_2D,
)
from silx.utils.deprecation import deprecated
from .tomoobject import TomoObject
_logger = logging.getLogger(__name__)
class FOV(_Enum):
"""Possible existing field of view"""
@classmethod
def from_value(cls, value):
if isinstance(value, str):
value = value.lower().title()
return super().from_value(value)
FULL = "Full"
HALF = "Half"
# keep compatibility for some time
_FOV = FOV
class SourceType(_Enum):
SPALLATION_NEUTRON = "Spallation Neutron Source"
PULSED_REACTOR_NEUTRON_SOURCE = "Pulsed Reactor Neutron Source"
REACTOR_NEUTRON_SOURCE = "Reactor Neutron Source"
SYNCHROTRON_X_RAY_SOURCE = "Synchrotron X-ray Source"
PULSED_MUON_SOURCE = "Pulsed Muon Source"
ROTATING_ANODE_X_RAY = "Rotating Anode X-ray"
FIXED_TUBE_X_RAY = "Fixed Tube X-ray"
UV_LASER = "UV Laser"
FREE_ELECTRON_LASER = "Free-Electron Laser"
OPTICAL_LASER = "Optical Laser"
ION_SOURCE = "Ion Source"
UV_PLASMA_SOURCE = "UV Plasma Source"
METAL_JET_X_RAY = "Metal Jet X-ray"
class Source:
"""Information regarding the x-ray storage ring/facility"""
def __init__(self, name=None, type=None):
self._name = name
self._type = type
@property
def name(self) -> Union[None, str]:
return self._name
@name.setter
def name(self, name: Union[str, None]):
if not isinstance(name, (str, type(None))):
raise TypeError("name is expected to be None or a str")
self._name = name
@property
def type(self) -> Union[None, SourceType]:
return self._type
@type.setter
def type(self, type_: Union[None, str, SourceType]):
if type_ is None:
self._type = None
else:
type_ = SourceType.from_value(type_)
self._type = type_
def __str__(self):
return f"source (name: {self.name}, type: {self.type})"
class ComputeMethod(_Enum):
MEAN = "mean" # compute the mean of dark / flat frames serie
MEDIAN = "median" # compute the median of dark / flat frames serie
FIRST = "first" # take the first frame of the dark / flat serie
LAST = "last" # take the last frame of the dark / flat serie
class ReducedFramesInfos:
"""contains reduced frames metadata as count_time and machine_electric_current"""
MACHINE_ELECT_CURRENT_KEY = "machine_electric_current"
COUNT_TIME_KEY = "count_time"
def __init__(self) -> None:
self._count_time = []
self._machine_electric_current = []
def __eq__(self, __o: object) -> bool:
if isinstance(__o, dict):
return ReducedFramesInfos().load_from_dict(__o) == self
if not isinstance(__o, ReducedFramesInfos):
return False
return numpy.array_equal(
numpy.array(self.count_time), numpy.array(__o.count_time)
) and numpy.array_equal(
numpy.array(self.machine_electric_current),
numpy.array(__o.machine_electric_current),
)
def clear(self):
self._count_time.clear()
self._machine_electric_current.clear()
@property
def count_time(self) -> list:
"""
        frame exposure time in seconds
"""
return self._count_time
@count_time.setter
def count_time(self, count_time: Optional[Iterable]):
if count_time is None:
self._count_time.clear()
else:
self._count_time = list(count_time)
@property
def machine_electric_current(self) -> list:
"""
machine electric current in Ampere
"""
return self._machine_electric_current
@machine_electric_current.setter
def machine_electric_current(self, machine_electric_current: Optional[Iterable]):
if machine_electric_current is None:
self._machine_electric_current.clear()
else:
self._machine_electric_current = list(machine_electric_current)
def to_dict(self) -> dict:
res = {}
if len(self.machine_electric_current) > 0:
res[self.MACHINE_ELECT_CURRENT_KEY] = self.machine_electric_current
if len(self.count_time) > 0:
res[self.COUNT_TIME_KEY] = self.count_time
return res
def load_from_dict(self, my_dict: dict):
self.machine_electric_current = my_dict.get(
self.MACHINE_ELECT_CURRENT_KEY, None
)
self.count_time = my_dict.get(self.COUNT_TIME_KEY, None)
return self
@staticmethod
def pop_info_keys(my_dict: dict):
if not isinstance(my_dict, dict):
raise TypeError
my_dict.pop(ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY, None)
my_dict.pop(ReducedFramesInfos.COUNT_TIME_KEY, None)
return my_dict
@staticmethod
def split_data_and_metadata(my_dict):
metadata = ReducedFramesInfos().load_from_dict(my_dict)
data = ReducedFramesInfos.pop_info_keys(my_dict)
return data, metadata
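# --- Illustrative usage (editor's sketch, not part of the original module) ---
# ReducedFramesInfos travels alongside the reduced darks / flats;
# split_data_and_metadata separates both from a single mixed dict.
def _example_reduced_frames_infos():
    mixed = {
        0: numpy.zeros((10, 10)),  # a reduced frame, keyed by its index
        ReducedFramesInfos.COUNT_TIME_KEY: [0.1],
        ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY: [0.2],
    }
    data, infos = ReducedFramesInfos.split_data_and_metadata(mixed)
    assert list(data.keys()) == [0]
    assert infos.count_time == [0.1]
    assert infos.machine_electric_current == [0.2]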
class TomoScanBase(TomoObject):
"""
Base Class representing a scan.
    It is used to obtain core information regarding an acquisition like
    projections, dark and flat field...
:param scan: path to the root folder containing the scan.
:type scan: Union[str,None]
"""
DICT_TYPE_KEY = "type"
DICT_PATH_KEY = "path"
_SCHEME = None
"""scheme to read data url for this type of acquisition"""
FRAME_REDUCER_CLASS = None
"""Frame reducer class to be use in order to compute reduced darks and reduced flats"""
def __init__(
self,
scan: Union[None, str],
type_: str,
ignore_projections: Union[None, Iterable] = None,
):
super().__init__()
self.path = scan
self._type = type_
        self._reduced_flats = None
        """flats once reduced. We must have one per series. When set, a dict is
        expected with the series index as the key and the mean or median of the
        flats series as the value"""
        self._reduced_flats_infos = ReducedFramesInfos()
        self._reduced_darks = None
        """darks once reduced. We must have one per series. When set, a dict is
        expected with the series index as the key and the mean or median of the
        darks series as the value"""
self._reduced_darks_infos = ReducedFramesInfos()
self._notify_ffc_rsc_missing = True
"""Should we notify the user if ffc fails because cannot find dark or
flat. Used to avoid several warnings. Only display one"""
self._projections = None
self._alignment_projections = None
self._flats_weights = None
"""list flats indexes to use for flat field correction and associate
weights"""
self.ignore_projections = ignore_projections
"""Extra information for normalization"""
self._intensity_monitor = None
"""monitor of the intensity during acquisition. Can be a diode
for example"""
self._source = None
self._intensity_normalization = IntensityNormalization()
"""Extra information for normalization"""
self._electric_current = None
self._count_time = None
def clear_caches(self):
"""clear caches. Might be call if some data changed after
first read of data or metadata"""
self._notify_ffc_rsc_missing = True
self.clear_frames_caches()
def clear_frames_caches(self):
self._alignment_projections = None
self._flats_weights = None
self._projections = None
@property
@deprecated(replacement="reduced_darks", since_version="1.0.0")
def normed_darks(self):
return self.reduced_darks
@deprecated(replacement="set_reduced_darks", since_version="1.0.0")
def set_normed_darks(self, darks, darks_infos=None):
self.set_reduced_darks(darks=darks, darks_infos=darks_infos)
@property
@deprecated(replacement="reduced_flats", since_version="1.0.0")
def normed_flats(self):
return self.reduced_flats
@deprecated(replacement="set_reduced_flats", since_version="1.0.0")
def set_normed_flats(self, flats, flats_infos=None):
self.set_reduced_flats(flats=flats, flats_infos=flats_infos)
@property
def reduced_darks(self):
return self._reduced_darks
def set_reduced_darks(
self, darks, darks_infos: Union[None, ReducedFramesInfos, dict] = None
):
self._reduced_darks = darks
self.reduced_darks_infos = darks_infos
@property
def reduced_flats(self):
return self._reduced_flats
def set_reduced_flats(
self, flats, flats_infos: Union[None, ReducedFramesInfos, dict] = None
):
self._reduced_flats = flats
self.reduced_flats_infos = flats_infos
@property
def reduced_darks_infos(self):
return self._reduced_darks_infos
@reduced_darks_infos.setter
def reduced_darks_infos(self, infos: Union[None, ReducedFramesInfos, dict]):
if infos is None:
self._reduced_darks_infos.clear()
elif isinstance(infos, ReducedFramesInfos):
self._reduced_darks_infos = infos
elif isinstance(infos, dict):
            self._reduced_darks_infos.load_from_dict(infos)
else:
raise TypeError
@property
def reduced_flats_infos(self):
return self._reduced_flats_infos
@reduced_flats_infos.setter
def reduced_flats_infos(self, infos: Union[None, ReducedFramesInfos, dict]):
if infos is None:
self._reduced_flats_infos.clear()
elif isinstance(infos, ReducedFramesInfos):
self._reduced_flats_infos = infos
elif isinstance(infos, dict):
            self._reduced_flats_infos.load_from_dict(infos)
else:
raise TypeError(f"unexpected error ({type(infos)})")
@property
def path(self) -> Union[None, str]:
"""
:return: path of the scan root folder.
:rtype: Union[str,None]
"""
return self._path
@path.setter
def path(self, path: Union[str, None]) -> None:
if path is None:
self._path = path
else:
if not isinstance(path, (str, pathlib.Path)):
raise TypeError(
f"path is expected to be a str or a pathlib.Path not {type(path)}"
)
self._path = os.path.realpath(str(path))
@property
def type(self) -> str:
"""
:return: type of the scanBase (can be 'edf' or 'hdf5' for now).
:rtype: str
"""
return self._type
@staticmethod
def is_tomoscan_dir(directory: str, **kwargs) -> bool:
"""
Check if the given directory is holding an acquisition
:param str directory:
:return: does the given directory contains any acquisition
:rtype: bool
"""
raise NotImplementedError("Base class")
def is_abort(self, **kwargs) -> bool:
"""
:return: True if the acquisition has been abort
:rtype: bool
"""
raise NotImplementedError("Base class")
@property
def source(self):
return self._source
@property
def flats(self) -> Union[None, dict]:
"""list of flats files"""
return self._flats
@flats.setter
def flats(self, flats: Union[None, dict]) -> None:
self._flats = flats
@property
def darks(self) -> Union[None, dict]:
"""list of darks files"""
return self._darks
@darks.setter
def darks(self, darks: Union[None, dict]) -> None:
self._darks = darks
@property
def projections(self) -> Union[None, dict]:
"""if found dict of projections urls with index during acquisition as
key"""
return self._projections
@projections.setter
def projections(self, projections: dict) -> None:
self._projections = projections
@property
def alignment_projections(self) -> Union[None, dict]:
"""
dict of projections made for alignment with acquisition index as key
None if not found
"""
return self._alignment_projections
@alignment_projections.setter
def alignment_projections(self, alignment_projs):
self._alignment_projections = alignment_projs
@property
def dark_n(self) -> Union[None, int]:
raise NotImplementedError("Base class")
@property
def tomo_n(self) -> Union[None, int]:
"""number of projection WITHOUT the return projections"""
raise NotImplementedError("Base class")
@property
def flat_n(self) -> Union[None, int]:
raise NotImplementedError("Base class")
@property
def pixel_size(self) -> Union[None, float]:
raise NotImplementedError("Base class")
@property
@deprecated(replacement="", since_version="1.1.0")
def x_real_pixel_size(self) -> Union[None, float]:
raise NotImplementedError("Base class")
@property
@deprecated(replacement="", since_version="1.1.0")
def y_real_pixel_size(self) -> Union[None, float]:
raise NotImplementedError("Base class")
def get_pixel_size(self, unit="m") -> Union[None, float]:
if self.pixel_size:
return self.pixel_size / MetricSystem.from_value(unit).value
else:
return None
@property
def instrument_name(self) -> Union[None, str]:
"""
:return: instrument name
"""
raise NotImplementedError("Base class")
@property
def dim_1(self) -> Union[None, int]:
raise NotImplementedError("Base class")
@property
def dim_2(self) -> Union[None, int]:
raise NotImplementedError("Base class")
@property
def ff_interval(self) -> Union[None, int]:
raise NotImplementedError("Base class")
@property
def scan_range(self) -> Union[None, int]:
raise NotImplementedError("Base class")
@property
def energy(self) -> Union[None, float]:
"""
:return: incident beam energy in keV
"""
raise NotImplementedError("Base class")
@property
def intensity_monitor(self):
raise NotImplementedError("Base class")
@property
def distance(self) -> Union[None, float]:
"""
:return: sample / detector distance in meter
"""
raise NotImplementedError("Base class")
@property
def field_of_view(self):
"""
        :return: field of view of the scan. None if unknown else Full or Half
"""
raise NotImplementedError("Base class")
@property
def estimated_cor_frm_motor(self):
"""
        :return: estimated center of rotation computed from the motor position,
                 expected to be within [-frame_width, +frame_width]
        :rtype: Union[None, float]
"""
raise NotImplementedError("Base class")
@property
def x_translation(self) -> typing.Union[None, tuple]:
raise NotImplementedError("Base class")
@property
def y_translation(self) -> typing.Union[None, tuple]:
raise NotImplementedError("Base class")
@property
def z_translation(self) -> typing.Union[None, tuple]:
raise NotImplementedError("Base class")
def get_distance(self, unit="m") -> Union[None, float]:
"""
:param Union[MetricSystem, str] unit: unit requested for the distance
:return: sample / detector distance with the requested unit
"""
if self.distance:
return self.distance / MetricSystem.from_value(unit).value
else:
return None
@property
def x_pixel_size(self) -> Optional[float]:
raise NotImplementedError("Base class")
@property
def y_pixel_size(self) -> Optional[float]:
raise NotImplementedError("Base class")
@property
def magnification(self) -> Optional[float]:
raise NotImplementedError("Base class")
def update(self) -> None:
"""Parse the root folder and files to update informations"""
raise NotImplementedError("Base class")
@property
def sequence_name(self):
"""Return the sequence name"""
raise NotImplementedError("Base class")
@property
def sample_name(self):
"""Return the sample name"""
raise NotImplementedError("Base class")
@property
def group_size(self):
"""Used in the case of zseries for example. Return the number of
sequence expected on the acquisition"""
raise NotImplementedError("Base class")
@property
def count_time(self) -> typing.Union[list, None]:
raise NotImplementedError("Base class")
@property
def electric_current(self) -> tuple:
"""Return the sample name"""
raise NotImplementedError("Base class")
@electric_current.setter
def electric_current(self, current: Optional[tuple]) -> None:
if not isinstance(current, (type(None), tuple)):
raise TypeError(
f"current is expected to be None or a tuple. Not {type(current)}"
)
self._electric_current = current
@property
def x_flipped(self) -> bool:
"""
        return True if the frames are flipped along the x axis
"""
raise NotImplementedError("Base class")
@property
def y_flipped(self) -> bool:
"""
        return True if the frames are flipped along the y axis
"""
raise NotImplementedError("Base class")
def get_x_flipped(self, default=None):
if self.x_flipped is None:
return default
else:
return self.x_flipped
def get_y_flipped(self, default=None):
if self.y_flipped is None:
return default
else:
return self.y_flipped
def get_identifier(self) -> ScanIdentifier:
"""
return the dataset identifier of the scan.
        The identifier is ensured to be unique for each scan and allows the
        user to store the scan as a string identifier and to retrieve it
        later from this single identifier.
"""
raise NotImplementedError("Base class")
def to_dict(self) -> dict:
"""
:return: convert the TomoScanBase object to a dictionary.
Used to serialize the object for example.
:rtype: dict
"""
res = dict()
res[self.DICT_TYPE_KEY] = self.type
res[self.DICT_PATH_KEY] = self.path
return res
def load_from_dict(self, _dict: dict):
"""
        Load properties contained in the dictionary.
:param _dict: dictionary to load
:type _dict: dict
:return: self
:raises: ValueError if dict is invalid
"""
raise NotImplementedError("Base class")
    def equal(self, other) -> bool:
        """
        :param :class:`.ScanBase` other: instance to compare with
        :return: True if the instances are equivalent
        ..note:: we cannot use the __eq__ function because this object needs
                 to be picklable
        """
        return (
            (isinstance(other, self.__class__) or isinstance(self, other.__class__))
            and self.type == other.type
            and self.path == other.path
        )
def get_proj_angle_url(self) -> dict:
"""
return a dictionary of all the projection. key is the angle of the
projection and value is the url.
Keys are int for 'standard' projections and strings for return
projections.
:return dict: angles as keys, radios as value.
"""
raise NotImplementedError("Base class")
@staticmethod
def map_urls_on_scan_range(urls, n_projection, scan_range) -> dict:
"""
map given urls to an angle regarding scan_range and number of projection.
        We assume that the 'extra projections' are taken following the
        'id19' policy:
* If the acquisition has a scan range of 360 then:
* if 4 extra projection, the angles are (270, 180, 90, 0)
* if 5 extra projection, the angles are (360, 270, 180, 90, 0)
* If the acquisition has a scan range of 180 then:
* if 2 extra projections: the angles are (90, 0)
* if 3 extra projections: the angles are (180, 90, 0)
..warning:: each url should contain only one radio.
:param urls: dict with all the urls. First url should be
the first radio acquire, last url should match the last
radio acquire.
:type urls: dict
:param n_projection: number of projection for the sample.
:type n_projection: int
:param scan_range: acquisition range (usually 180 or 360)
:type scan_range: float
:return: angle in degree as key and url as value
:rtype: dict
:raises: ValueError if the number of extra images found and scan_range
are incoherent
"""
assert n_projection is not None
ordered_url = OrderedDict(sorted(urls.items(), key=lambda x: x))
res = {}
# deal with the 'standard' acquisitions
for proj_i in range(n_projection):
url = list(ordered_url.values())[proj_i]
if n_projection == 1:
angle = 0.0
else:
angle = proj_i * scan_range / (n_projection - 1)
if proj_i < len(urls):
res[angle] = url
if len(urls) > n_projection:
# deal with extra images (used to check if the sampled as moved for
# example)
extraImgs = list(ordered_url.keys())[n_projection:]
if len(extraImgs) in (4, 5):
if scan_range < 360:
_logger.warning(
"incoherent data information to retrieve"
"scan extra images angle"
)
elif len(extraImgs) == 4:
res["270(1)"] = ordered_url[extraImgs[0]]
res["180(1)"] = ordered_url[extraImgs[1]]
res["90(1)"] = ordered_url[extraImgs[2]]
res["0(1)"] = ordered_url[extraImgs[3]]
else:
res["360(1)"] = ordered_url[extraImgs[0]]
res["270(1)"] = ordered_url[extraImgs[1]]
res["180(1)"] = ordered_url[extraImgs[2]]
res["90(1)"] = ordered_url[extraImgs[3]]
res["0(1)"] = ordered_url[extraImgs[4]]
elif len(extraImgs) in (2, 3):
if scan_range > 180:
_logger.warning(
"incoherent data information to retrieve"
"scan extra images angle"
)
elif len(extraImgs) == 3:
res["180(1)"] = ordered_url[extraImgs[0]]
res["90(1)"] = ordered_url[extraImgs[1]]
res["0(1)"] = ordered_url[extraImgs[2]]
else:
res["90(1)"] = ordered_url[extraImgs[0]]
res["0(1)"] = ordered_url[extraImgs[1]]
elif len(extraImgs) == 1:
res["0(1)"] = ordered_url[extraImgs[0]]
else:
raise ValueError(
"incoherent data information to retrieve scan" "extra images angle"
)
return res
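    # --- Illustrative usage (editor's sketch, not part of tomoscan) ---
    # Five urls, three 'standard' projections over 180 degrees: the first
    # three urls are mapped to 0, 90 and 180 degrees; the two extra images
    # are interpreted, following the id19 policy, as return projections.
    @staticmethod
    def _example_map_urls():
        urls = {i: "url_{}".format(i) for i in range(5)}
        res = TomoScanBase.map_urls_on_scan_range(
            urls, n_projection=3, scan_range=180
        )
        assert res[0.0] == "url_0" and res[90.0] == "url_1" and res[180.0] == "url_2"
        assert res["90(1)"] == "url_3" and res["0(1)"] == "url_4"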
@property
def intensity_normalization(self):
return self._intensity_normalization
@intensity_normalization.setter
def intensity_normalization(self, value):
try:
method = _IntensityMethod.from_value(value)
except ValueError:
pass
else:
self._intensity_normalization.method = method
def get_sinogram(
self,
line,
subsampling=1,
norm_method: typing.Union[None, str] = None,
**kwargs,
):
"""
extract the sinogram from projections
:param int line: which sinogram we want
:param int subsampling: subsampling to apply. Allows to skip some io
:return: computed sinogram from projections
:rtype: numpy.array
"""
if (
self.projections is not None
and self.dim_2 is not None
and line > self.dim_2
) or line < 0:
raise ValueError("requested line {} is not in the scan".format(line))
if self.projections is not None:
y_dim = ceil(len(self.projections) / subsampling)
sinogram = numpy.empty((y_dim, self.dim_1))
_logger.debug(
"compute sinogram for line {} of {} (subsampling: {})".format(
line, self.path, subsampling
)
)
advancement = Progress(
name="compute sinogram for {}, line={},"
"sampling={}".format(os.path.basename(self.path), line, subsampling)
)
advancement.setMaxAdvancement(len(self.projections))
projections = self.projections
o_keys = list(projections.keys())
o_keys.sort()
for i_proj, proj_index in enumerate(o_keys):
if i_proj % subsampling == 0:
proj_url = projections[proj_index]
proj = silx.io.utils.get_data(proj_url)
proj = self.flat_field_correction(
projs=[proj], proj_indexes=[proj_index]
)[0]
sinogram[i_proj // subsampling] = proj[line]
advancement.increaseAdvancement(1)
return self._apply_sino_norm(
sinogram,
line=line,
norm_method=norm_method,
subsampling=subsampling,
**kwargs,
)
else:
return None
def _apply_sino_norm(
self, sinogram, line, norm_method: _IntensityMethod, subsampling=1, **kwargs
) -> Optional[numpy.ndarray]:
if norm_method is not None:
norm_method = _IntensityMethod.from_value(norm_method)
if norm_method in (None, _IntensityMethod.NONE):
return sinogram
elif norm_method is _IntensityMethod.CHEBYSHEV:
return normalize_chebyshev_2D(sinogram)
elif norm_method is _IntensityMethod.LSQR_SPLINE:
return normalize_lsqr_spline_2D(sinogram)
elif norm_method in (_IntensityMethod.DIVISION, _IntensityMethod.SUBTRACTION):
# get intensity factor
if "value" in kwargs:
intensities = kwargs["value"]
_logger.info("Apply sinogram normalization from 'value' key")
elif "dataset_url" in kwargs:
_logger.info("Apply sinogram normalization from 'dataset_url' key")
try:
if isinstance(kwargs["dataset_url"], DataUrl):
url = kwargs["dataset_url"]
else:
url = DataUrl(path=kwargs["dataset_url"])
intensities = get_data(url)
except Exception as e:
_logger.error(f"Fail to load intensities. Error is {e}")
return
else:
raise KeyError(
f"{norm_method.value} requires a value or an url to be computed"
)
if intensities is None:
raise ValueError("provided normalization intensities is None")
# apply normalization
if numpy.isscalar(intensities):
if norm_method is _IntensityMethod.SUBTRACTION:
sinogram = sinogram - intensities
elif norm_method is _IntensityMethod.DIVISION:
sinogram = sinogram / intensities
else:
raise NotImplementedError
elif not isinstance(intensities, numpy.ndarray):
raise TypeError(
f"intensities is expected to be a numpy array not a ({type(intensities)})"
)
elif intensities.ndim == 1:
# in the case intensities is a 1D array: we expect to have one value per projection
for sl, value in enumerate(intensities):
if norm_method is _IntensityMethod.SUBTRACTION:
sinogram[sl] = sinogram[sl] - value
elif norm_method is _IntensityMethod.DIVISION:
sinogram[sl] = sinogram[sl] / value
elif intensities.ndim in (2, 3):
# in the case intensities is a 2D array: we expect to have one array per projection (each line has a value)
# in the case intensities is a 3D array: we expect to have one frame per projection
for sl, value in enumerate(intensities):
if norm_method is _IntensityMethod.SUBTRACTION:
sinogram[sl] = sinogram[sl] - value[line]
elif norm_method is _IntensityMethod.DIVISION:
sinogram[sl] = sinogram[sl] / value[line]
else:
                raise ValueError(
                    "normalization intensities are expected to be 1D, 2D or 3D"
                )
return sinogram
else:
raise ValueError("norm method not handled", norm_method)
def _frame_flat_field_correction(
self,
data: typing.Union[numpy.ndarray, DataUrl],
dark,
flat_weights: dict,
line: Union[None, int] = None,
):
"""
compute flat field correction for a provided data from is index
one dark and two flats (require also indexes)
"""
assert isinstance(data, (numpy.ndarray, DataUrl))
if isinstance(data, DataUrl):
data = get_data(data)
can_process = True
if flat_weights in (None, {}):
if self._notify_ffc_rsc_missing:
_logger.error(
f"cannot make flat field correction, flat not found from {self} or weights not computed"
)
can_process = False
else:
for flat_index, _ in flat_weights.items():
if flat_index not in self.reduced_flats:
_logger.error(
f"flat {flat_index} has been removed, unable to apply flat field"
)
can_process = False
elif (
self.reduced_flats is not None
and self.reduced_flats[flat_index].ndim != 2
):
_logger.error(
"cannot make flat field correction, flat should be of dimension 2"
)
can_process = False
if can_process is False:
self._notify_ffc_rsc_missing = False
if line is None:
return data
else:
return data[line]
if len(flat_weights) == 1:
flat_value = self.reduced_flats[list(flat_weights.keys())[0]]
elif len(flat_weights) == 2:
flat_keys = list(flat_weights.keys())
flat_1 = flat_keys[0]
flat_2 = flat_keys[1]
flat_value = (
self.reduced_flats[flat_1] * flat_weights[flat_1]
+ self.reduced_flats[flat_2] * flat_weights[flat_2]
)
else:
raise ValueError(
"no more than two flats are expected and"
"at least one shuold be provided"
)
if line is None:
assert data.ndim == 2
div = flat_value - dark
div[div == 0] = 1.0
return (data - dark) / div
else:
assert data.ndim == 1
div = flat_value[line] - dark[line]
div[div == 0] = 1
return (data - dark[line]) / div
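    # --- Illustrative usage (editor's sketch, not part of tomoscan) ---
    # The core of the correction above is (data - dark) / (flat - dark),
    # computed here on synthetic frames to show the expected result.
    @staticmethod
    def _example_flat_field_formula():
        data = numpy.full((4, 4), 60.0)
        dark = numpy.full((4, 4), 10.0)
        flat = numpy.full((4, 4), 110.0)
        div = flat - dark
        div[div == 0] = 1.0  # guard against division by zero, as above
        assert numpy.allclose((data - dark) / div, 0.5)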
def flat_field_correction(
self,
projs: typing.Iterable,
proj_indexes: typing.Iterable,
line: Union[None, int] = None,
):
"""Apply flat field correction on the given data
:param Iterable projs: list of projection (numpy array) to apply correction
on
        :param Iterable proj_indexes: list of indexes of the projections in
                                      the acquisition sequence. Values can
                                      be int or None. If None then the
                                      index used will be the one in the
                                      middle of the flats taken.
        :param line: index of the line to apply flat field on. If not provided
                     the flat field is applied on the entire frame
:type line: None or int
:return: corrected data: list of numpy array
:rtype: list
"""
assert isinstance(projs, typing.Iterable)
assert isinstance(proj_indexes, typing.Iterable)
assert isinstance(line, (type(None), int))
        def has_missing_keys():
            # weights need to be recomputed when at least one projection
            # index has no weights associated yet
            if proj_indexes is None:
                return False
            for proj_index in proj_indexes:
                if proj_index not in self._flats_weights:
                    return True
            return False
def return_without_correction():
def load_data(proj):
if isinstance(proj, DataUrl):
return get_data(proj)
else:
return proj
if line is not None:
res = [
load_data(proj)[line] if isinstance(proj, DataUrl) else proj
for proj in projs
]
else:
res = [
load_data(proj) if isinstance(proj, DataUrl) else proj
for proj in projs
]
return res
if self._flats_weights in (None, {}) or has_missing_keys():
self._flats_weights = self._get_flats_weights()
if self._flats_weights in (None, {}):
if self._notify_ffc_rsc_missing:
_logger.error("Unable to compute flat weights")
self._notify_ffc_rsc_missing = False
return return_without_correction()
darks = self._reduced_darks
if darks is not None and len(darks) > 0:
# take only one dark into account for now
dark = list(darks.values())[0]
else:
dark = None
if dark is None:
if self._notify_ffc_rsc_missing:
_logger.error("cannot make flat field correction, dark not found")
self._notify_ffc_rsc_missing = False
return return_without_correction()
if dark is not None and dark.ndim != 2:
if self._notify_ffc_rsc_missing:
_logger.error(
"cannot make flat field correction, dark should be of "
"dimension 2"
)
self._notify_ffc_rsc_missing = False
return return_without_correction()
return numpy.array(
[
self._frame_flat_field_correction(
data=frame,
dark=dark,
flat_weights=self._flats_weights[proj_i]
if proj_i in self._flats_weights
else None,
line=line,
)
for frame, proj_i in zip(projs, proj_indexes)
]
)
def _get_flats_weights(self):
"""compute flats indexes to use and weights for each projection"""
if self.reduced_flats is None:
return None
flats_indexes = sorted(self.reduced_flats.keys())
def get_weights(proj_index):
if proj_index in flats_indexes:
return {proj_index: 1.0}
pos = bisect_left(flats_indexes, proj_index)
left_pos = flats_indexes[pos - 1]
if pos == 0:
return {flats_indexes[0]: 1.0}
elif pos > len(flats_indexes) - 1:
return {flats_indexes[-1]: 1.0}
else:
right_pos = flats_indexes[pos]
delta = right_pos - left_pos
return {
left_pos: 1 - (proj_index - left_pos) / delta,
right_pos: 1 - (right_pos - proj_index) / delta,
}
if self.reduced_flats is None or len(self.reduced_flats) == 0:
return {}
else:
res = {}
for proj_index in self.projections:
res[proj_index] = get_weights(proj_index=proj_index)
return res
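    # --- Illustrative usage (editor's sketch, not part of tomoscan) ---
    # Standalone illustration of the weighting computed by _get_flats_weights:
    # a projection at index 2, framed by flats at indices 0 and 8, is
    # corrected with a 75% / 25% blend of the two reduced flats.
    @staticmethod
    def _example_flat_weights():
        flats_indexes = [0, 8]
        proj_index = 2
        pos = bisect_left(flats_indexes, proj_index)  # -> 1
        left, right = flats_indexes[pos - 1], flats_indexes[pos]
        delta = right - left
        weights = {
            left: 1 - (proj_index - left) / delta,
            right: 1 - (right - proj_index) / delta,
        }
        assert weights == {0: 0.75, 8: 0.25}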
def get_projections_intensity_monitor(self):
"""return intensity monitor values for projections"""
raise NotImplementedError("Base class")
def get_flat_expected_location(self):
raise NotImplementedError("Base class")
def get_dark_expected_location(self):
raise NotImplementedError("Base class")
def get_projection_expected_location(self):
raise NotImplementedError("Base class")
def get_energy_expected_location(self):
raise NotImplementedError("Base class")
def get_distance_expected_location(self):
raise NotImplementedError("Base class")
def get_pixel_size_expected_location(self):
raise NotImplementedError("Base class")
def get_relative_file(
self, file_name: str, with_dataset_prefix=True
) -> Optional[str]:
"""
:param str file_name: name of the file to create
:param bool with_dataset_prefix: If True will prefix the requested file by the dataset name like datasetname_file_name
        :return: path to the requested file according to the 'Scan' / 'dataset' location. Return None if the scan has no path
:rtype: Optional[str]
"""
raise NotImplementedError("Base class")
def get_dataset_basename(self) -> str:
raise NotImplementedError("Base class")
def _format_file_path(self, url, entry, idx, idx_zfill4):
file_path = url.file_path()
if file_path is not None:
file_path = file_path.format(
index=str(idx),
index_zfill4=idx_zfill4,
entry=entry,
scan_prefix=self.get_dataset_basename(),
)
if not os.path.isabs(file_path):
file_path = os.path.join(self.path, file_path)
return file_path
def _dump_frame_dict(
self,
frames: dict,
output_urls,
frames_metadata: Optional[ReducedFramesInfos],
metadata_output_urls: Optional[tuple],
):
"""
        utility function to save frames to a set of output_urls
        Behavior with HDF5: it expects a dedicated group where it can save the different frames with their indices.
        It will do a first pass at this group level to remove unused datasets and will overwrite the ones it can in order to limit the file size increase
"""
if not isinstance(frames, dict):
raise TypeError(
f"inputs `frames` is expected to be a dict not {type(frames)}"
)
if not isinstance(output_urls, (list, tuple, set)):
raise TypeError(
f"output_urls is expected to be a tuple not a {type(output_urls)}"
)
if self.path is None:
raise ValueError("No dataset path provided")
if frames_metadata is not None:
if not isinstance(frames_metadata, ReducedFramesInfos):
raise TypeError(
f"darks_infos is a {type(frames_metadata)} when None or {ReducedFramesInfos} expected"
)
self._check_reduced_infos(reduced_frames=frames, infos=frames_metadata)
def format_data_path(url, entry, idx, idx_zfill4):
data_path = url.data_path()
if data_path is not None:
data_path = data_path.format(
index=str(idx), index_zfill4=idx_zfill4, entry=entry
)
return data_path
entry = "entry"
if hasattr(self, "entry"):
entry = self.entry
def clean_frame_group(url):
"""
            For HDF5, in order to avoid increasing the file size, we need to overwrite datasets when possible.
            But the darks / flats groups can contain other datasets which pollute the group.
            This function removes the unused datasets (frame indices) when necessary
"""
file_path = self._format_file_path(
url, entry=entry, idx=None, idx_zfill4=None
)
if not (os.path.exists(file_path) and h5py.is_hdf5(file_path)):
return
group_path = "/".join(
format_data_path(url, entry=entry, idx=0, idx_zfill4="0000").split("/")[
:-1
]
)
used_datasets = []
for idx, _ in frames.items():
idx_zfill4 = str(idx).zfill(4)
used_datasets.append(
format_data_path(
url, entry=entry, idx=idx, idx_zfill4=idx_zfill4
).split("/")[-1]
)
with HDF5File(file_path, mode="a") as h5s:
if group_path in h5s:
for key in h5s[group_path].keys():
if key not in used_datasets:
del h5s[group_path][key]
        # first remove datasets that are no longer used so the darks / flats
        # groups are clean before we start to write in them
        for url in output_urls:
            clean_frame_group(url=url)
        # save data
        for i_frame, (idx, frame) in enumerate(frames.items()):
if not isinstance(frame, numpy.ndarray):
raise TypeError("frames are expected to be 2D numpy.ndarray")
elif frame.ndim == 3 and frame.shape[0] == 1:
frame = frame.reshape([frame.shape[1], frame.shape[2]])
elif frame.ndim != 2:
raise ValueError("frames are expected to be 2D numpy.ndarray")
idx_zfill4 = str(idx).zfill(4)
data_path = format_data_path(
url, entry=entry, idx=idx, idx_zfill4=idx_zfill4
)
file_path = self._format_file_path(
url, entry=entry, idx=idx, idx_zfill4=idx_zfill4
)
scheme = url.scheme()
if scheme == "fabio":
if data_path is not None:
raise ValueError("fabio does not handle data_path")
else:
# for edf: add metadata to the header if some, without taking into account the
# metadata_output_urls (too complicated for backward compatibility...)
header = {}
if (
frames_metadata is not None
and len(frames_metadata.machine_electric_current) > 0
):
header["SRCUR"] = frames_metadata.machine_electric_current[
i_frame
]
if (
frames_metadata is not None
and len(frames_metadata.count_time) > 0
):
header["CountTime"] = frames_metadata.count_time[i_frame]
edf_writer = fabio.edfimage.EdfImage(
data=frame,
header=header,
)
edf_writer.write(file_path)
elif scheme in ("hdf5", "silx"):
with HDF5File(file_path, mode="a") as h5s:
if data_path in h5s:
h5s[data_path][()] = frame
else:
h5s[data_path] = frame
h5s[data_path].attrs["interpretation"] = "image"
else:
raise ValueError(
f"scheme {scheme} is not handled for frames. Should be fabio, silx of hdf5"
)
frames_indexes = list(frames.keys())
if frames_metadata is not None:
for url, idx in zip(metadata_output_urls, frames_indexes):
idx_zfill4 = str(idx).zfill(4)
metadata_grp_path = format_data_path(
url, entry=entry, idx=idx, idx_zfill4=idx_zfill4
)
file_path = self._format_file_path(
url, entry=entry, idx=idx, idx_zfill4=idx_zfill4
)
scheme = url.scheme()
for metadata_name, metadata_values in frames_metadata.to_dict().items():
# warning: for now we only handle lists (of count_time and machine_electric_current)
if len(metadata_values) == 0:
continue
else:
# save metadata
if scheme in ("hdf5", "silx"):
with HDF5File(file_path, mode="a") as h5s:
metadata_path = "/".join(
[metadata_grp_path, metadata_name]
)
if metadata_path in h5s:
del h5s[metadata_path]
h5s[metadata_path] = metadata_values
unit = None
if metadata_name == ReducedFramesInfos.COUNT_TIME_KEY:
unit = TimeSystem.SECOND
elif (
metadata_name
== ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY
):
unit = ElectricCurrentSystem.AMPERE
if unit is not None:
h5s[metadata_path].attrs["units"] = str(unit)
else:
raise ValueError(
f"scheme {scheme} is not handled for frames metadata. Should be silx of hdf5"
)
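# A hedged usage sketch of this internal helper: `frames` maps the frame index in
# the acquisition to a 2D numpy.ndarray and `output_urls` is a tuple of DataUrl
# templates; REDUCED_DARKS_DATAURLS and the call pattern below are taken from
# tomoscan/test/test_scanbase.py (`scan` is assumed to be a TomoScanBase subclass
# instance):
#
#     frames = {0: numpy.ones((10, 10)), 10: numpy.zeros((10, 10))}
#     scan._dump_frame_dict(
#         frames=frames,
#         output_urls=(scan.REDUCED_DARKS_DATAURLS,),
#     )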
def _load_frame_dict(
self, inputs_urls, metadata_input_urls, return_as_url=False
) -> dict:
"""
:note: note on patterns:
* handled patterns are:
* file_path pattern:
* {index}: only handled for edf files
* {index_zfill4}: only handled for edf files
* {index} and {index_zfill4} can each be used at most once. Having several {index}, or one {index} and one {index_zfill4}, will fail
* data_path pattern:
* {entry}
* {index}: works only if set at the end of the path (as dataset name)
* {index_zfill4}: works only if set at the end of the path (as dataset name)
:return: tuple(frames_data, frames_metadata).
* frames_data: dict with the frame index in the acquisition sequence as key. The value is the frame as a numpy array if return_as_url is False, else a DataUrl to the frame
* frames_metadata: an instance of ReducedFramesInfos. We consider this too small to use the DataUrl mechanism when return_as_url is set to True
:rtype: tuple
"""
from tomoscan.esrf.scan.utils import (
get_files_from_pattern,
) # avoid cyclic import
if self.path is None:
raise ValueError("No dataset path provided")
res_data = {}
entry = "entry"
if hasattr(self, "entry"):
entry = self.entry
res_metadata = ReducedFramesInfos()
# load frames infos
for url in inputs_urls:
data_path = url.data_path()
if data_path is not None:
data_path = data_path.format(
entry=entry, index_zfill4="{index_zfill4}", index="{index}"
)
# we don't want to handle index_zfill4 and index at this level
file_pattern = url.file_path()
if file_pattern is not None:
file_pattern = file_pattern.format(
entry=entry,
index_zfill4="{index_zfill4}",
index="{index}",
scan_prefix=self.get_dataset_basename(),
)
# we don't want to handle index_zfill4 and index at this level
scheme = url.scheme()
frames_path_and_index = []
# list of tuples (frame_file, frame_index). frame_index can be None if not found
patterns = ("index_zfill4", "index")
contains_patterns = False
for pattern in patterns:
if pattern in file_pattern:
contains_patterns = True
files_from_pattern = get_files_from_pattern(
file_pattern=file_pattern,
pattern=pattern,
research_dir=self.path,
)
for frame_index, frame_file in files_from_pattern.items():
frames_path_and_index.append(
(os.path.join(self.path, frame_file), frame_index)
)
if not contains_patterns:
frames_path_and_index.append(
(os.path.join(self.path, file_pattern), None)
)
def format_data_path(data_path):
index_zfill4_pattern = False
if data_path.endswith("{index_zfill4}"):
index_zfill4_pattern = True
data_path = data_path[: -len("{index_zfill4}")]
if data_path.endswith("{index}"):
data_path = data_path[: -len("{index}")]
if data_path.endswith("/"):
data_path = data_path[:-1]
return data_path, index_zfill4_pattern
for frame_file_path, frame_index in frames_path_and_index:
if scheme == "fabio":
if not os.path.exists(frame_file_path):
continue
try:
with fabio.open(frame_file_path) as handler:
if handler.nframes > 1:
_logger.warning(
f"{frame_file_path} is expected to have one frame. Only the first one will be picked"
)
if frame_index in res_data:
_logger.error(
f"two frames found with the same index {frame_index}"
)
if return_as_url:
res_data[frame_index] = DataUrl(
file_path=frame_file_path, scheme="fabio"
)
else:
res_data[frame_index] = handler.data
if "SRCUR" in handler.header:
res_metadata.machine_electric_current.append(
float(handler.header["SRCUR"])
)
if "CountTime" in handler.header:
res_metadata.count_time.append(
float(handler.header["CountTime"])
)
except OSError as e:
_logger.error(e)
elif scheme in ("hdf5", "silx"):
data_path, index_zfill4_pattern = format_data_path(data_path)
if not os.path.exists(frame_file_path):
continue
with HDF5File(frame_file_path, mode="r") as h5s:
dataset_or_group = h5s[data_path]
if isinstance(dataset_or_group, h5py.Dataset):
idx = None
name = dataset_or_group.name.split("/")[-1]
if name.isnumeric():
try:
idx = int(name)
except ValueError:
idx = None
if return_as_url:
res_data[idx] = DataUrl(
file_path=frame_file_path,
data_path=data_path,
scheme="silx",
)
else:
res_data[idx] = dataset_or_group[()]
else:
assert isinstance(
dataset_or_group, h5py.Group
), f"expect a group not {type(dataset_or_group)}"
# browse children
for name, item in dataset_or_group.items():
if isinstance(item, h5py.Dataset):
if name.isnumeric():
if index_zfill4_pattern and len(name) != 4:
continue
else:
try:
idx = int(name)
except ValueError:
_logger.info(
f"fail to cast {name} as a integer"
)
continue
if return_as_url:
res_data[idx] = DataUrl(
file_path=frame_file_path,
data_path=data_path + "/" + name,
scheme="silx",
)
else:
res_data[idx] = dataset_or_group[name][
()
]
else:
raise ValueError(
f"scheme {scheme} is not handled. Should be fabio, silx of hdf5"
)
def get_unit_factor(attrs, metric_system):
if "unit" in attrs:
return metric_system.from_str(attrs["unit"]).value
elif "units":
return metric_system.from_str(attrs["units"]).value
return 1.0
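# For example (hedged sketch, assuming TimeSystem.from_str("second") maps to
# TimeSystem.SECOND with value 1.0):
#
#     factor = get_unit_factor(attrs={"units": "second"}, metric_system=TimeSystem)
#     # factor == 1.0; attributes without any "unit"/"units" key also fall back to 1.0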
# load frames metadata
if metadata_input_urls is not None:
for url in metadata_input_urls:
metadata_file = url.file_path()
metadata_file = metadata_file.format(
scan_prefix=self.get_dataset_basename(),
)
if not os.path.isabs(metadata_file):
metadata_file = os.path.join(self.path, metadata_file)
data_path = url.data_path().format(
entry=entry,
)
if scheme in ("hdf5", "silx"):
if not os.path.exists(metadata_file):
continue
with HDF5File(metadata_file, mode="r") as h5s:
if data_path not in h5s:
continue
parent_group = h5s[data_path]
if ReducedFramesInfos.COUNT_TIME_KEY in parent_group:
count_time = silx.io.utils.h5py_read_dataset(
parent_group[ReducedFramesInfos.COUNT_TIME_KEY]
)
unit_factor = get_unit_factor(
attrs=parent_group[
ReducedFramesInfos.COUNT_TIME_KEY
].attrs,
metric_system=TimeSystem,
)
res_metadata.count_time = count_time * unit_factor
if ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY in parent_group:
machine_electric_current = silx.io.utils.h5py_read_dataset(
parent_group[
ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY
]
)
unit_factor = get_unit_factor(
attrs=parent_group[
ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY
].attrs,
metric_system=ElectricCurrentSystem,
)
res_metadata.machine_electric_current = (
machine_electric_current * unit_factor
)
return res_data, res_metadata
@staticmethod
def _check_reduced_infos(reduced_frames, infos):
incoherent_metadata_mess = "inconsistent provided infos:"
incoherent_metadata = False
if len(infos.count_time) not in (0, len(reduced_frames)):
incoherent_metadata = True
incoherent_metadata_mess += f"\n - count_time has {len(infos.count_time)} elements when 0 or {len(reduced_frames)} expected"
if len(infos.machine_electric_current) not in (0, len(reduced_frames)):
incoherent_metadata = True
incoherent_metadata_mess += f"\n - machine_electric_current has {len(infos.machine_electric_current)} elements when 0 or {len(reduced_frames)} expected"
if incoherent_metadata:
raise ValueError(incoherent_metadata_mess)
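# A hedged consistency sketch: with two reduced frames, each metadata list must
# hold zero or exactly two values (`scan` is assumed to be a TomoScanBase
# subclass instance exposing this static method):
#
#     infos = ReducedFramesInfos()
#     infos.count_time = [1.0, 1.0]  # one value per reduced frame: accepted
#     infos.machine_electric_current = []  # empty list: accepted
#     scan._check_reduced_infos(
#         reduced_frames={0: numpy.ones((10, 10)), 1: numpy.ones((10, 10))},
#         infos=infos,
#     )  # a single-element count_time would raise a ValueError instead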
def save_reduced_darks(
self,
darks: dict,
output_urls: tuple,
darks_infos: Optional[ReducedFramesInfos] = None,
metadata_output_urls: Optional[tuple] = None,
) -> None:
"""
Dump the computed darks (median / mean...) into files
"""
self._dump_frame_dict(
frames=darks,
output_urls=output_urls,
frames_metadata=darks_infos,
metadata_output_urls=metadata_output_urls,
)
def load_reduced_darks(
self,
inputs_urls: tuple,
metadata_input_urls=None,
return_as_url: bool = False,
return_info: bool = False,
) -> Union[dict, tuple]:
"""
Load reduced darks (median / mean...) from files
"""
darks, infos = self._load_frame_dict(
inputs_urls=inputs_urls,
return_as_url=return_as_url,
metadata_input_urls=metadata_input_urls,
)
if return_info:
return darks, infos
else:
return darks
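# A minimal save/load round-trip sketch, mirroring tomoscan/test/test_scanbase.py
# (the REDUCED_DARKS_DATAURLS class constant is assumed to point to the default
# reduced-darks location):
#
#     scan.save_reduced_darks(
#         darks={0: numpy.ones((10, 10))},
#         output_urls=(scan.REDUCED_DARKS_DATAURLS,),
#     )
#     darks = scan.load_reduced_darks(inputs_urls=(scan.REDUCED_DARKS_DATAURLS,))
#     assert 0 in darks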
def save_reduced_flats(
self,
flats: dict,
output_urls: tuple,
flats_infos: Optional[ReducedFramesInfos] = None,
metadata_output_urls: Optional[tuple] = None,
) -> None:
"""
Dump reduced flats (median / mean...) into files
"""
self._dump_frame_dict(
frames=flats,
output_urls=output_urls,
frames_metadata=flats_infos,
metadata_output_urls=metadata_output_urls,
)
def load_reduced_flats(
self,
inputs_urls: tuple,
metadata_input_urls=None,
return_as_url: bool = False,
return_info: bool = False,
) -> Union[dict, tuple]:
"""
Load reduced flats (median / mean...) from files
"""
flats, infos = self._load_frame_dict(
inputs_urls=inputs_urls,
return_as_url=return_as_url,
metadata_input_urls=metadata_input_urls,
)
if return_info:
return flats, infos
else:
return flats
def compute_reduced_flats(
self,
reduced_method="median",
overwrite=True,
output_dtype=None,
return_info=False,
):
"""
:param ReduceMethod reduced_method: method used to compute the flats
:param bool overwrite: if some flats have already been computed, overwrite them
:param bool return_info: whether to return (reduced_frames, infos) or only reduced_frames
"""
if self.FRAME_REDUCER_CLASS is None:
raise ValueError("no frame reducer class provided")
frame_reducer = self.FRAME_REDUCER_CLASS( # pylint: disable=E1102
scan=self,
reduced_method=reduced_method,
target="flats",
overwrite=overwrite,
output_dtype=output_dtype,
)
reduced_frames, infos = frame_reducer.run()
if return_info:
return reduced_frames, infos
else:
return reduced_frames
def compute_reduced_darks(
self,
reduced_method="mean",
overwrite=True,
output_dtype=None,
return_info=False,
):
"""
:param ReduceMethod reduced_method: method used to compute the darks
:param bool overwrite: if some darks have already been computed, overwrite them
:param bool return_info: whether to return (reduced_frames, infos) or only reduced_frames
"""
if self.FRAME_REDUCER_CLASS is None:
raise ValueError("no frame reducer class provided")
frame_reducer = self.FRAME_REDUCER_CLASS( # pylint: disable=E1102
scan=self,
reduced_method=reduced_method,
target="darks",
overwrite=overwrite,
output_dtype=output_dtype,
)
reduced_frames, infos = frame_reducer.run()
if return_info:
return reduced_frames, infos
else:
return reduced_frames
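# A hedged usage sketch combining the two reducers above (`scan` is assumed to
# be an instance of a scan class that defines FRAME_REDUCER_CLASS):
#
#     reduced_darks, darks_infos = scan.compute_reduced_darks(
#         reduced_method="mean", return_info=True
#     )
#     reduced_flats = scan.compute_reduced_flats(reduced_method="median")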
@staticmethod
def get_volume_output_file_name(z=None, suffix=None):
"""if used by tomwer and nabu this should help for tomwer to find out the output files of anbu from a configuration file. Could help to get some normalization there"""
raise NotImplementedError
tomoscan-1.2.2/tomoscan/scanfactory.py
from silx.utils.deprecation import deprecated_warning
deprecated_warning(
"Class",
name="tomoscan.scanfactory.ScanFactory",
reason="Has been moved",
replacement="tomoscan.factory.TomoObjectFactory",
only_once=True,
)
from tomoscan.factory import Factory as ScanFactory # noqa F401
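# Migration sketch: the deprecated import path still works but warns; new code
# is expected to go through the factory module instead (as the tests below do):
#
#     from tomoscan.factory import Factory
#     scan = Factory.create_scan_object(scan_path)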
tomoscan-1.2.2/tomoscan/serie.py
# coding: utf-8
# /*##########################################################################
# Copyright (C) 2016- 2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""Module with utils in order to define series of scan (TomoScanBase)"""
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "10/01/2021"
from typing import Iterable, Optional
from tomoscan.scanbase import TomoScanBase
from tomoscan.tomoobject import TomoObject
from .factory import Factory
from .identifier import BaseIdentifier
import logging
_logger = logging.getLogger(__name__)
class Serie(list):
"""
A serie can be viewed as an extended list of :class:`TomoObject`.
This allows the user to define a relation between scans like:
.. image:: img/scan_serie_class_diagram.png
"""
def __init__(
self, name: Optional[str] = None, iterable=None, use_identifiers=False
) -> None:
self._name = "Unknow" if name is None else name
self.__use_identifiers = use_identifiers
if iterable is None:
iterable = []
super().__init__()
for item in iterable:
self.append(item)
@property
def name(self) -> str:
return self._name
@name.setter
def name(self, name: str):
if not isinstance(name, str):
raise TypeError("name is expected to be an instance of str")
else:
self._name = name
@property
def use_identifiers(self):
return self.__use_identifiers
def append(self, object: TomoObject):
if not isinstance(object, TomoObject):
raise TypeError(
f"object is expected to be an instance of {TomoObject} not {type(object)}"
)
if self.use_identifiers:
super().append(object.get_identifier().to_str())
else:
super().append(object)
def remove(self, object: TomoObject):
if not isinstance(object, TomoObject):
raise TypeError(
f"object is expected to be an instance of {TomoObject} not {type(object)}"
)
if self.use_identifiers:
super().remove(object.get_identifier().to_str())
else:
super().remove(object)
def to_dict_of_str(self) -> dict:
"""
Return the serie as a dict of str: the list of object identifiers (already stored
as str when use_identifiers is True, otherwise obtained from get_identifier().to_str()),
plus the serie name and the use_identifiers flag.
"""
objects = []
for dataset in self:
if self.use_identifiers:
objects.append(dataset)
else:
objects.append(dataset.get_identifier().to_str())
return {
"objects": objects,
"name": self.name,
"use_identifiers": self.use_identifiers,
}
@staticmethod
def from_dict_of_str(
dict_, factory=Factory, use_identifiers: Optional[bool] = None
):
"""
create a Serie from its definition in a dictionary
:param dict dict_: dictionary containing the serie to create
:param factory: factory to use in order to create scans defined from their Identifier (as an instance of DatasetIdentifier or its str representation)
:type factory: Factory
:param Optional[bool] use_identifiers: use_identifiers can be overwritten when creating the serie
:return: created Serie
:rtype: Serie
"""
name = dict_["name"]
objects = dict_["objects"]
if use_identifiers is None:
use_identifiers = dict_.get("use_identifiers", False)
instanciated_scans = []
for tomo_obj in objects:
if isinstance(tomo_obj, (str, BaseIdentifier)):
instanciated_scans.append(
factory.create_tomo_object_from_identifier(identifier=tomo_obj)
)
else:
raise TypeError(
f"elements of dict_['objects'] are expected to be an instance of TomoObject, DatasetIdentifier or str representing a DatasetIdentifier. Not {type(tomo_obj)}"
)
return Serie(
name=name, use_identifiers=use_identifiers, iterable=instanciated_scans
)
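# A hedged round-trip sketch, mirroring tomoscan/test/test_serie.py (volume_1 and
# volume_2 are assumed to be TomoObject instances such as EDFVolume):
#
#     serie = Serie("my serie", [volume_1, volume_2])
#     serie_2 = Serie.from_dict_of_str(serie.to_dict_of_str())
#     # serie_2 holds the objects re-created from their string identifiers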
def __contains__(self, tomo_obj: BaseIdentifier):
if self.use_identifiers:
key = tomo_obj.get_identifier().to_str()
else:
key = tomo_obj
return super().__contains__(key)
def __eq__(self, other):
if not isinstance(other, Serie):
return False
return self.name == other.name and super().__eq__(other)
def __ne__(self, other):
return not self.__eq__(other)
def sequences_to_series_from_sample_name(scans: Iterable) -> tuple:
"""
group scans into series according to their sample name
:param Iterable scans: scans to group
:return: tuple of Serie (one serie per sample name)
"""
series = {}
for scan in scans:
if not isinstance(scan, TomoScanBase):
raise TypeError("Elements are expected to be instances of TomoScanBase")
if scan.sample_name is None:
_logger.warning(f"no scan sample found for {scan}")
if scan.sample_name not in series:
series[scan.sample_name] = Serie(use_identifiers=False)
series[scan.sample_name].append(scan)
return tuple(series.values())
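# Usage sketch (hedged): scans sharing a sample_name end up in the same serie.
# MockHDF5 is the mock used by tomoscan/test/test_serie.py:
#
#     scan_a = MockHDF5(dir_1, n_proj=2, sample_name="toto").scan
#     scan_b = MockHDF5(dir_2, n_proj=2, sample_name="toto").scan
#     scan_c = MockHDF5(dir_3, n_proj=2, sample_name="titi").scan
#     series = sequences_to_series_from_sample_name((scan_a, scan_b, scan_c))
#     assert len(series) == 2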
def check_serie_is_consistant_frm_sample_name(scans: Iterable):
"""
Ensure the provided group of scans is valid. Otherwise raise an error
:param Iterable scans: group of TomoScanBase to check
"""
l_scans = set()
for scan in scans:
if not isinstance(scan, TomoScanBase):
raise TypeError("Elements are expected to be instance of TomoScanBase")
if scan in l_scans:
raise ValueError("{} is present at least twice")
elif len(l_scans) > 0:
first_scan = next(iter((l_scans)))
if first_scan.sample_name != scan.sample_name:
raise ValueError(
f"{scan} and {first_scan} are from two different sample: {scan.sample_name} and {first_scan.sample_name}"
)
l_scans.add(scan)
def serie_is_complete_from_group_size(scans: Iterable) -> bool:
"""
Check whether the provided group of scans forms a complete serie according to the expected group size
:param Iterable scans: group of TomoScanBase to check
:return: True if the group is complete
:rtype: bool
"""
if len(scans) == 0:
return True
try:
check_serie_is_consistant_frm_sample_name(scans=scans)
except Exception as e:
_logger.error("provided group is invalid. {}".format(e))
raise e
else:
group_size = next(iter(scans)).group_size
if group_size is None:
_logger.warning("No information found regarding group size")
return True
elif group_size == len(scans):
return True
elif group_size < len(scans):
_logger.warning("more scans found than group_size")
return True
else:
return False
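# A hedged sketch of the group_size rule implemented above (MockHDF5 usage taken
# from tomoscan/test/test_serie.py): with group_size=2, a single scan is an
# incomplete serie while two scans complete it.
#
#     scan_1 = MockHDF5(dir_a, n_proj=2, sample_name="z-serie", group_size=2).scan
#     scan_2 = MockHDF5(dir_b, n_proj=2, sample_name="z-serie", group_size=2).scan
#     assert not serie_is_complete_from_group_size([scan_1])
#     assert serie_is_complete_from_group_size([scan_1, scan_2])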
tomoscan-1.2.2/tomoscan/test/
tomoscan-1.2.2/tomoscan/test/__init__.py
tomoscan-1.2.2/tomoscan/test/conftest.py
from tempfile import TemporaryDirectory
import pytest
@pytest.fixture(scope="session", autouse=True)
def changetmp(request):
with TemporaryDirectory(prefix="pytest--") as temp_dir:
request.config.option.basetemp = temp_dir
yield
tomoscan-1.2.2/tomoscan/test/test_framereducerbase.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "06/01/2022"
from tomoscan.framereducerbase import FrameReducerBase
from tomoscan.esrf.mock import MockHDF5
import pytest
def test_FrameReducerBase_instanciation(tmp_path):
scan = MockHDF5(tmp_path, n_proj=2).scan
reducer = FrameReducerBase(scan=scan, reduced_method="mean", target="darks")
with pytest.raises(NotImplementedError):
reducer.run()
tomoscan-1.2.2/tomoscan/test/test_hdf5_utils.py
import pytest
from tomoscan.utils.hdf5 import DatasetReader
from silx.io.url import DataUrl
def test_errors_DatasetReader():
with pytest.raises(TypeError):
with DatasetReader("toto"):
pass
with pytest.raises(ValueError):
with DatasetReader(DataUrl()):
pass
with pytest.raises(ValueError):
with DatasetReader(DataUrl(file_path="test", data_path="dssad", data_slice=2)):
pass
tomoscan-1.2.2/tomoscan/test/test_io.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "18/09/2020"
import os
import h5py
import numpy
import shutil
import unittest
import tempfile
from tomoscan.io import check_virtual_sources_exist
class TestCheckVirtualSourcesExists(unittest.TestCase):
"""insure the check_virtual_sources_exist function exists"""
def setUp(self) -> None:
self.folder = tempfile.mkdtemp()
self.h5_file = os.path.join(self.folder, "myfile.hdf5")
def tearDown(self) -> None:
shutil.rmtree(self.folder)
def test_check_virtual_sources_exist_no_vds(self):
with h5py.File(self.h5_file, mode="w") as h5f:
h5f["data"] = numpy.random.random((120, 120))
self.assertTrue(check_virtual_sources_exist(self.h5_file, "data"))
def test_check_virtual_sources_exist_vds(self):
# create the source datasets
for i in range(4):
filename = os.path.join(self.folder, f"{i}.h5")
with h5py.File(filename, mode="w") as h5f:
h5f.create_dataset("data", (100,), "i4", numpy.arange(100))
layout = h5py.VirtualLayout(shape=(4, 100), dtype="i4")
for i in range(4):
filename = os.path.join(self.folder, f"{i}.h5")
layout[i] = h5py.VirtualSource(filename, "data", shape=(100,))
with h5py.File(self.h5_file, mode="w") as h5f:
# create the virtual dataset
h5f.create_virtual_dataset("data", layout, fillvalue=-5)
self.assertTrue(check_virtual_sources_exist(self.h5_file, "data"))
tomoscan-1.2.2/tomoscan/test/test_normalization.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "31/08/2021"
from tomoscan.test.utils import HDF5MockContext
from tomoscan.nexus.paths.nxtomo import nx_tomo_path_latest
import tomoscan.esrf.hdf5scan
import tomoscan.normalization
import os
import tempfile
import numpy
import pytest
from typing import Union
import h5py
from silx.io.url import DataUrl
try:
import scipy.interpolate # noqa F401
except ImportError:
has_scipy = False
else:
has_scipy = True
@pytest.mark.parametrize("method", ["subtraction", "division"])
def test_normalization_scalar_normalization(method):
"""test scalar normalization"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
with pytest.raises(KeyError):
scan.get_sinogram(line=2, norm_method=method)
scan.get_sinogram(line=2, norm_method=method, value=12.2)
def test_normalize_chebyshev_2D():
"""Test checbychev 2D normalization"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
sinogram = scan.get_sinogram(line=2)
tomoscan.normalization.normalize_chebyshev_2D(sinogram)
sinogram_2 = scan.get_sinogram(line=2, norm_method="chebyshev")
assert numpy.array_equal(sinogram, sinogram_2)
@pytest.mark.skipif(condition=not has_scipy, reason="scipy missing")
def test_normalize_lsqr_spline_2D():
"""test lsqr_spline_2D normalization"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
sinogram = scan.get_sinogram(line=2)
tomoscan.normalization.normalize_lsqr_spline_2D(sinogram)
sinogram_2 = scan.get_sinogram(line=2, norm_method="lsqr spline")
assert numpy.array_equal(sinogram, sinogram_2)
def test_normalize_dataset():
"""Test extra information that can be provided relative to a dataset"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
dim=100,
intensity_monitor=True,
) as scan:
datasetinfo = tomoscan.normalization._DatasetInfos()
datasetinfo.file_path = scan.master_file
datasetinfo.data_path = "/".join(
[scan.entry, nx_tomo_path_latest.INTENSITY_MONITOR_PATH]
)
assert isinstance(datasetinfo.data_path, str)
assert isinstance(datasetinfo.file_path, str)
datasetinfo.scope = tomoscan.normalization._DatasetScope.GLOBAL
assert isinstance(datasetinfo.scope, tomoscan.normalization._DatasetScope)
scan.intensity_normalization.set_extra_infos(datasetinfo)
def test_normalize_roi():
"""Test extra information that can be provided relative to a roi"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
dim=100,
intensity_monitor=True,
) as scan:
roi_info = tomoscan.normalization._ROIInfo()
scan.intensity_normalization.set_extra_infos(roi_info)
scan.intensity_normalization.method = None
scan.intensity_normalization.method = "lsqr spline"
assert isinstance(
scan.intensity_normalization.method, tomoscan.normalization.Method
)
str(scan.intensity_normalization)
@pytest.mark.parametrize("norm_method", ("subtraction", "division"))
@pytest.mark.parametrize("as_url", (True, False))
@pytest.mark.parametrize(
"values",
(
0.5,
numpy.arange(1, 11),
numpy.arange(1, 101).reshape(10, 10),
numpy.arange(1, 1001).reshape(10, 10, 10),
),
)
def test_get_sinogram(
tmp_path, norm_method, as_url, values: Union[float, numpy.ndarray]
):
test_dir = tmp_path / "test1"
test_dir.mkdir()
params = {}
if as_url:
file_path = str(test_dir / "tmp_file.hdf5")
with h5py.File(file_path, mode="w") as root:
root["data"] = values
params["dataset_url"] = DataUrl(
file_path=file_path, data_path="data", scheme="silx"
)
else:
params["value"] = values
with HDF5MockContext(
scan_path=os.path.join(test_dir, "scan_test"),
n_proj=10,
n_ini_proj=10,
dim=10,
) as scan:
raw_sinogram = scan.get_sinogram(line=2)
norm_sinogram = scan.get_sinogram(line=2, norm_method=norm_method, **params)
assert isinstance(raw_sinogram, numpy.ndarray)
assert isinstance(norm_sinogram, numpy.ndarray)
if numpy.isscalar(values):
if norm_method == "subtraction":
numpy.testing.assert_almost_equal(norm_sinogram, raw_sinogram - values)
elif norm_method == "division":
numpy.testing.assert_almost_equal(norm_sinogram, raw_sinogram / values)
else:
raise ValueError
elif values.ndim == 1:
expected_sinogram = raw_sinogram.copy()
for i_proj, proj_value in enumerate(values):
if norm_method == "subtraction":
expected_sinogram[i_proj] = raw_sinogram[i_proj] - proj_value
elif norm_method == "division":
expected_sinogram[i_proj] = raw_sinogram[i_proj] / proj_value
else:
raise ValueError(norm_method)
numpy.testing.assert_almost_equal(norm_sinogram, expected_sinogram)
elif values.ndim in (2, 3):
expected_sinogram = raw_sinogram.copy()
for i_proj, proj_value in enumerate(values):
if norm_method == "subtraction":
expected_sinogram[i_proj] = raw_sinogram[i_proj] - proj_value[2]
elif norm_method == "division":
expected_sinogram[i_proj] = raw_sinogram[i_proj] / proj_value[2]
else:
raise ValueError(norm_method)
numpy.testing.assert_almost_equal(norm_sinogram, expected_sinogram)
tomoscan-1.2.2/tomoscan/test/test_progress.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module for giving information on process progress"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "31/08/2021"
import tomoscan.progress
def test_progress():
"""Simple test of the Progress API"""
progress = tomoscan.progress.Progress("this is progress")
progress.reset()
progress.startProcess()
progress.setMaxAdvancement(80)
for adv in (10, 20, 50, 70):
progress.setAdvancement(adv)
for i in range(10):
progress.increaseAdvancement(1)
def test_advancement():
"""Simple test of the _Advancement API"""
for i in range(4):
tomoscan.progress._Advancement.getNextStep(
tomoscan.progress._Advancement.getStep(i)
)
tomoscan-1.2.2/tomoscan/test/test_scanbase.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "01/09/2021"
from copy import deepcopy
import unittest
import numpy.random
from tomoscan.scanbase import ReducedFramesInfos, TomoScanBase
from tomoscan.scanbase import Source, SourceType
from tomoscan.test.utils import HDF5MockContext
import shutil
import tempfile
from silx.io.url import DataUrl
import h5py
import os
import pytest
class TestFlatFieldCorrection(unittest.TestCase):
def setUp(self):
self.data_dir = tempfile.mkdtemp()
self.scan = TomoScanBase(None, None)
self.scan.set_reduced_darks(
{
0: numpy.random.random(100).reshape((10, 10)),
}
)
self.scan.set_reduced_flats(
{
1: numpy.random.random(100).reshape((10, 10)),
12: numpy.random.random(100).reshape((10, 10)),
21: numpy.random.random(100).reshape((10, 10)),
}
)
self._data_urls = {}
projections = {}
file_path = os.path.join(self.data_dir, "data_file.h5")
for i in range(-2, 30):
projections[i] = numpy.random.random(100).reshape((10, 10))
data_path = "/".join(("data", str(i)))
self._data_urls[i] = DataUrl(
file_path=file_path, data_path=data_path, scheme="silx"
)
with h5py.File(file_path, mode="a") as h5s:
h5s[data_path] = projections[i]
self.scan.projections = projections
def tearDown(self):
shutil.rmtree(self.data_dir)
def test_get_flats_weights(self):
"""test the _get_flats_weights function and insure flat weights
are correct"""
flat_weights = self.scan._get_flats_weights()
self.assertTrue(isinstance(flat_weights, dict))
self.assertEqual(len(flat_weights), 32)
self.assertEqual(flat_weights.keys(), self.scan.projections.keys())
self.assertEqual(flat_weights[-2], {1: 1.0})
self.assertEqual(flat_weights[0], {1: 1.0})
self.assertEqual(flat_weights[1], {1: 1.0})
self.assertEqual(flat_weights[12], {12: 1.0})
self.assertEqual(flat_weights[21], {21: 1.0})
self.assertEqual(flat_weights[24], {21: 1.0})
def assertAlmostEqual(ddict1, ddict2):
self.assertEqual(ddict1.keys(), ddict2.keys())
for key in ddict1.keys():
self.assertAlmostEqual(ddict1[key], ddict2[key])
assertAlmostEqual(flat_weights[2], {1: 10.0 / 11.0, 12: 1.0 / 11.0})
assertAlmostEqual(flat_weights[10], {1: 2.0 / 11.0, 12: 9.0 / 11.0})
assertAlmostEqual(flat_weights[18], {12: 3.0 / 9.0, 21: 6.0 / 9.0})
def test_flat_field_data_url(self):
"""insure the flat_field is computed. Simple processing test when
provided data is a DataUrl"""
projections_keys = [key for key in self.scan.projections.keys()]
projections_urls = [self.scan.projections[key] for key in projections_keys]
self.scan.flat_field_correction(projections_urls, projections_keys)
def test_flat_field_data_numpy_array(self):
"""insure the flat_field is computed. Simple processing test when
provided data is a numpy array"""
self.scan.projections = self._data_urls
projections_keys = [key for key in self.scan.projections.keys()]
projections_urls = [self.scan.projections[key] for key in projections_keys]
self.scan.flat_field_correction(projections_urls, projections_keys)
def test_Source_API():
"""Test Source API"""
source = Source(name="my source", type=SourceType.SYNCHROTRON_X_RAY_SOURCE)
source.name = "toto"
with pytest.raises(TypeError):
source.name = 12
assert isinstance(source.name, str)
source.type = SourceType.FREE_ELECTRON_LASER
assert isinstance(source.type, SourceType)
source.type = None
str(source)
def test_TomoScanBase_API():
"""Test TomoScanBase API"""
with pytest.raises(NotImplementedError):
TomoScanBase.is_tomoscan_dir("")
with pytest.raises(NotImplementedError):
TomoScanBase(scan="", type_="undefined").is_abort()
scan = TomoScanBase(scan="", type_="undefined")
scan.source
scan.flats = {1: numpy.random.random(100 * 100).reshape(100, 100)}
assert len(scan.flats) == 1
scan.darks = {0: numpy.random.random(100 * 100).reshape(100, 100)}
assert len(scan.darks) == 1
scan.alignment_projections = {
2: numpy.random.random(100 * 100).reshape(100, 100),
3: numpy.random.random(100 * 100).reshape(100, 100),
}
assert len(scan.alignment_projections) == 2
for prop in (
"dark_n",
"tomo_n",
"flat_n",
"pixel_size",
"instrument_name",
"dim_1",
"dim_2",
"scan_range",
"ff_interval",
"energy",
"intensity_monitor",
"field_of_view",
"estimated_cor_frm_motor",
"x_translation",
"y_translation",
"z_translation",
"sequence_name",
"sample_name",
"group_size",
):
with pytest.raises(NotImplementedError):
getattr(scan, prop)
assert isinstance(scan.to_dict(), dict)
for fct in (
"update",
"get_proj_angle_url",
"get_projections_intensity_monitor",
"get_flat_expected_location",
"get_dark_expected_location",
"get_projection_expected_location",
"get_energy_expected_location",
"get_distance_expected_location",
"get_pixel_size_expected_location",
):
with pytest.raises(NotImplementedError):
getattr(scan, fct)()
def test_save_load_reduced_darks(tmpdir):
with HDF5MockContext(
scan_path=os.path.join(tmpdir, "test_save_load_reduced_darks"),
n_proj=10,
n_ini_proj=10,
distance=1.0,
energy=1.0,
) as scan:
with pytest.raises(TypeError):
scan.save_reduced_darks(
darks=None,
output_urls=(scan.REDUCED_DARKS_DATAURLS,),
)
with pytest.raises(TypeError):
scan.save_reduced_darks(
darks={
0: numpy.ones((10, 10)),
},
output_urls=None,
)
scan.path = None
with pytest.raises(ValueError):
scan.save_reduced_darks(
darks={
0: numpy.ones((10, 10)),
},
output_urls=(scan.REDUCED_DARKS_DATAURLS,),
)
def test_ReducedFramesInfo():
"""
test ReducedFramesInfos class
"""
infos = ReducedFramesInfos()
assert infos.to_dict() == {}
infos.count_time = numpy.array([12.3, 13.0])
assert infos.count_time == [12.3, 13.0]
infos.machine_electric_current = [23.5, 56.9]
assert infos.machine_electric_current == [23.5, 56.9]
my_dict = deepcopy(infos.to_dict())
assert my_dict == {
ReducedFramesInfos.COUNT_TIME_KEY: [12.3, 13.0],
ReducedFramesInfos.MACHINE_ELECT_CURRENT_KEY: [23.5, 56.9],
}
infos.clear()
assert infos.to_dict() == {}
new_infos = ReducedFramesInfos()
new_infos.load_from_dict(my_dict)
assert new_infos.to_dict() == my_dict
with pytest.raises(TypeError):
new_infos.count_time = 12
with pytest.raises(TypeError):
new_infos.machine_electric_current = 12
tomoscan-1.2.2/tomoscan/test/test_scanfactory.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "24/01/2017"
import os
from tomoscan.esrf.edfscan import EDFTomoScan
from tomoscan.esrf.hdf5scan import HDF5TomoScan
from tomoscan.scanbase import TomoScanBase
from tomoscan.factory import Factory
from tomoscan.test.utils import UtilsTest
from tomoscan.esrf.mock import MockEDF
import tempfile
import pytest
def test_scan_edf():
"""can we create a TomoScanBase object from a folder containing a
valid .edf acquisition"""
scan_dir = UtilsTest.getDataset("test10")
scan = Factory.create_scan_object(scan_dir)
assert isinstance(scan, EDFTomoScan)
def test_one_nx():
"""Can we create a TomoScanBase from a .nx master file containing
one acquisition"""
file_name = "frm_edftomomill_oneentry.nx"
master_file = UtilsTest.getH5Dataset(file_name)
scan = Factory.create_scan_object(master_file)
assert isinstance(scan, HDF5TomoScan)
assert scan.path == os.path.dirname(master_file)
assert scan.master_file == master_file
assert scan.entry == "/entry"
def test_one_two_nx():
"""Can we create a TomoScanBase from a .nx master file containing
two acquisitions"""
master_file = UtilsTest.getH5Dataset("frm_edftomomill_twoentries.nx")
scan = Factory.create_scan_object(master_file)
assert isinstance(scan, HDF5TomoScan)
assert scan.path == os.path.dirname(master_file)
assert scan.master_file == master_file
assert scan.entry == "/entry0000"
def test_two_nx():
"""Can we create two TomoScanBase from a .nx master file containing
two acquisitions using the Factory"""
master_file = UtilsTest.getH5Dataset("frm_edftomomill_twoentries.nx")
scans = Factory.create_scan_objects(master_file)
assert len(scans) == 2
for scan, scan_entry in zip(scans, ("/entry0000", "/entry0001")):
assert isinstance(scan, HDF5TomoScan) is True
assert scan.path == os.path.dirname(master_file)
assert scan.master_file == master_file
assert scan.entry == scan_entry
def test_invalid_path():
"""Insure an error is raised if the path as no meaning"""
with pytest.raises(ValueError):
Factory.create_scan_object("toto")
with pytest.raises(ValueError):
Factory.create_scan_objects("toto")
with tempfile.TemporaryDirectory() as scan_dir:
with pytest.raises(ValueError):
Factory.create_scan_object(scan_dir)
def test_edf_scan_creation():
with tempfile.TemporaryDirectory() as folder:
scan_dir = os.path.join(folder, "my_scan")
MockEDF.mockScan(scanID=scan_dir, nRecons=10)
scan = Factory.create_scan_object(scan_path=scan_dir)
assert isinstance(scan, EDFTomoScan)
scans = Factory.create_scan_objects(scan_path=scan_dir)
assert len(scans) == 1
assert isinstance(scans[0], EDFTomoScan)
dict_ = scan.to_dict()
Factory.create_scan_object_frm_dict(dict_)
# test invalid dict
dict_[TomoScanBase.DICT_TYPE_KEY] = "tata"
with pytest.raises(ValueError):
Factory.create_scan_object_frm_dict(dict_)
del dict_[TomoScanBase.DICT_TYPE_KEY]
with pytest.raises(ValueError):
Factory.create_scan_object_frm_dict(dict_)
tomoscan-1.2.2/tomoscan/test/test_serie.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "10/01/2021"
import pytest
from tomoscan.serie import (
Serie,
sequences_to_series_from_sample_name,
check_serie_is_consistant_frm_sample_name,
serie_is_complete_from_group_size,
)
from tomoscan.esrf.mock import MockHDF5
from tomoscan.esrf.volume.edfvolume import EDFVolume
import tempfile
import os
@pytest.mark.parametrize("use_identifiers", [True, False])
def test_serie_scan(use_identifiers):
"""simple test of a serie"""
with tempfile.TemporaryDirectory() as dir:
serie1 = Serie(use_identifiers=use_identifiers)
assert isinstance(serie1.name, str)
serie2 = Serie("test", use_identifiers=use_identifiers)
assert serie2.name == "test"
assert len(serie2) == 0
scan1 = MockHDF5(dir, n_proj=2).scan
scan2 = MockHDF5(dir, n_proj=2).scan
serie3 = Serie("test", [scan1, scan2], use_identifiers=use_identifiers)
assert serie3.name == "test"
assert len(serie3) == 2
with pytest.raises(TypeError):
serie1.append("toto")
assert scan1 not in serie1
serie1.append(scan1)
assert len(serie1) == 1
assert scan1 in serie1
serie1.append(scan1)
serie1.remove(scan1)
serie1.name = "toto"
with pytest.raises(TypeError):
serie1.name = 12
with pytest.raises(TypeError):
serie1.remove(12)
serie1.append(scan2)
serie1.append(scan1)
assert len(serie1) == 3
serie1.remove(scan1)
assert len(serie1) == 2
serie1 == Serie("toto", (scan1, scan2), use_identifiers=use_identifiers)
assert scan1 in serie1
assert scan2 in serie1
identifiers_list = serie1.to_dict_of_str()
assert type(identifiers_list["objects"]) is list
assert len(identifiers_list["objects"]) == 2
for id_str in identifiers_list["objects"]:
assert isinstance(id_str, str)
assert serie1 != 12
@pytest.mark.parametrize("use_identifiers", [True, False])
def test_serie_volume(use_identifiers):
volume_1 = EDFVolume(folder="test")
volume_2 = EDFVolume()
volume_3 = EDFVolume(folder="test2")
volume_4 = EDFVolume()
serie1 = Serie("Volume serie", [volume_1, volume_2])
assert volume_1 in serie1
assert volume_2 in serie1
assert volume_3 not in serie1
assert volume_4 not in serie1
serie1.remove(volume_2)
serie1.append(volume_3)
identifiers_list = serie1.to_dict_of_str()
assert type(identifiers_list["objects"]) is list
assert len(identifiers_list["objects"]) == 2
for id_str in identifiers_list["objects"]:
assert isinstance(id_str, str)
serie2 = Serie.from_dict_of_str(serie1.to_dict_of_str())
assert len(serie2) == 2
with pytest.raises(TypeError):
Serie.from_dict_of_str({"name": "toto", "objects": (12, 13)})
def test_serie_utils():
"""test utils function from Serie"""
with tempfile.TemporaryDirectory() as tmp_path:
dir_1 = os.path.join(tmp_path, "scan1")
dir_2 = os.path.join(tmp_path, "scan2")
dir_3 = os.path.join(tmp_path, "scan3")
for dir_folder in (dir_1, dir_2, dir_3):
os.makedirs(dir_folder)
scan_s1_1 = MockHDF5(dir_1, n_proj=2, sample_name="toto").scan
scan_s1_2 = MockHDF5(dir_2, n_proj=2, sample_name="toto").scan
scan_s2_2 = MockHDF5(dir_3, n_proj=2, sample_name="titi").scan
found_series = sequences_to_series_from_sample_name(
(scan_s1_1, scan_s1_2, scan_s2_2)
)
assert len(found_series) == 2
with pytest.raises(TypeError):
sequences_to_series_from_sample_name([12])
for serie in found_series:
check_serie_is_consistant_frm_sample_name(serie)
with pytest.raises(ValueError):
check_serie_is_consistant_frm_sample_name(
Serie("test", [scan_s1_1, scan_s2_2])
)
dir_4 = os.path.join(tmp_path, "scan4")
dir_5 = os.path.join(tmp_path, "scan5")
scan_zserie_1 = MockHDF5(
dir_4, n_proj=2, sample_name="z-serie", group_size=2
).scan
scan_zserie_2 = MockHDF5(
dir_5, n_proj=2, sample_name="z-serie", group_size=2
).scan
assert not serie_is_complete_from_group_size(
[
scan_zserie_1,
]
)
assert serie_is_complete_from_group_size([scan_zserie_1, scan_zserie_2])
dir_6 = os.path.join(tmp_path, "scan6")
scan_zserie_3 = MockHDF5(
dir_6, n_proj=2, sample_name="z-serie", group_size=2
).scan
assert serie_is_complete_from_group_size(
[scan_zserie_1, scan_zserie_2, scan_zserie_3]
)
with pytest.raises(TypeError):
serie_is_complete_from_group_size([1, 2])
tomoscan-1.2.2/tomoscan/test/test_tomoobject.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""test of the tomoscan.tomoobject module"""
import pytest
from tomoscan.tomoobject import TomoObject
def test_tomoobject():
obj = TomoObject()
with pytest.raises(NotImplementedError):
obj.from_identifier("test")
with pytest.raises(NotImplementedError):
obj.get_identifier()
tomoscan-1.2.2/tomoscan/test/test_utils.py
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "03/05/2022"
import pytest
from tomoscan.utils.geometry import BoundingBox1D, BoundingBox3D, _BoundingBox
def test_bounding_box_base():
bb = _BoundingBox(0, 1)
with pytest.raises(NotImplementedError):
bb.get_overlap(None)
def test_bounding_box_1D():
"""
check if BoundingBox1D is working properly
"""
# check overlapping
bb1 = BoundingBox1D(0.0, 1.0)
bb2 = BoundingBox1D(0.2, 1.0)
assert bb1.get_overlap(bb2) == BoundingBox1D(0.2, 1.0)
assert bb2.get_overlap(bb1) == BoundingBox1D(0.2, 1.0)
bb1 = BoundingBox1D(0.0, 1.0)
bb2 = BoundingBox1D(0.2, 0.8)
assert bb1.get_overlap(bb2) == BoundingBox1D(0.2, 0.8)
assert bb2.get_overlap(bb1) == BoundingBox1D(0.2, 0.8)
bb1 = BoundingBox1D(0.0, 1.0)
bb2 = BoundingBox1D(1.0, 1.2)
assert bb2.get_overlap(bb1) == BoundingBox1D(1.0, 1.0)
# check outside
bb1 = BoundingBox1D(0.0, 1.0)
bb2 = BoundingBox1D(2.0, 2.2)
assert bb2.get_overlap(bb1) is None
assert bb1.get_overlap(bb2) is None
# check on fully including in the other
bb1 = BoundingBox1D(0.0, 1.0)
bb2 = BoundingBox1D(0.1, 0.3)
assert bb2.get_overlap(bb1) == BoundingBox1D(0.1, 0.3)
assert bb1.get_overlap(bb2) == BoundingBox1D(0.1, 0.3)
with pytest.raises(TypeError):
bb1.get_overlap(None)
def test_bounding_box_3D():
"""
check if BoundingBox3D is working properly
"""
    # check overlapping
bb1 = BoundingBox3D((0.0, -0.1, 0.0), [1.0, 0.8, 0.9])
bb2 = BoundingBox3D([0.2, 0.0, 0.1], (1.0, 2.0, 3.0))
assert bb1.get_overlap(bb2) == BoundingBox3D((0.2, 0.0, 0.1), (1.0, 0.8, 0.9))
assert bb2.get_overlap(bb1) == BoundingBox3D((0.2, 0.0, 0.1), (1.0, 0.8, 0.9))
# check outside
bb1 = BoundingBox3D((0.0, -0.1, 0.0), [1.0, 0.8, 0.9])
bb2 = BoundingBox3D([0.2, 0.0, -2.1], (1.0, 2.0, -1.0))
assert bb2.get_overlap(bb1) is None
assert bb1.get_overlap(bb2) is None
    # check one box fully included in the other
bb1 = BoundingBox3D((0.0, 0.1, 0.2), (1.0, 1.1, 1.2))
bb2 = BoundingBox3D((-2.0, -3.0, -4.0), (2.0, 2.0, 2.0))
assert bb2.get_overlap(bb1) == BoundingBox3D((0.0, 0.1, 0.2), (1.0, 1.1, 1.2))
assert bb1.get_overlap(bb2) == BoundingBox3D((0.0, 0.1, 0.2), (1.0, 1.1, 1.2))
with pytest.raises(TypeError):
bb1.get_overlap(None)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/test/test_validator.py 0000644 0236253 0006511 00000026537 00000000000 021710 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""Module containing validators"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "25/08/2021"
import tomoscan.validator
from tomoscan.test.utils import HDF5MockContext
import os
import pytest
import tempfile
import numpy
import h5py
import sys
frame_validators = (
tomoscan.validator.FlatEntryValidator,
tomoscan.validator.DarkEntryValidator,
tomoscan.validator.ProjectionEntryValidator,
)
@pytest.mark.parametrize("validator_cls", frame_validators)
def test_frames_validator(validator_cls):
"""Test frame validator on a complete dataset"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
validator = validator_cls(scan)
assert validator.is_valid(), "scan contains all kind of frames"
@pytest.mark.parametrize("validator_cls", frame_validators)
def test_frames_validator_2(validator_cls):
"""Test frame validator on a empty dataset"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=0,
n_ini_proj=0,
create_ini_dark=False,
create_ini_ref=False,
create_final_ref=False,
) as scan:
validator = validator_cls(scan)
assert not validator.is_valid(), "scan doesn't contains any projection"
@pytest.mark.parametrize("validator_cls", frame_validators)
def test_frames_validator_3(validator_cls):
"""Test frame validator on a dataset missing some projections"""
tomo_n = 20
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=tomo_n,
n_ini_proj=tomo_n - 1,
create_ini_dark=False,
create_ini_ref=False,
create_final_ref=False,
) as scan:
with h5py.File(scan.master_file, mode="a") as h5f:
entry = h5f[scan.entry]
entry.require_group("instrument").require_group("detector")[
"tomo_n"
] = tomo_n
validator = validator_cls(scan)
assert not validator.is_valid(), "scan doesn't contains tomo_n projections"
phase_retrieval_validators = (
tomoscan.validator.EnergyValidator,
tomoscan.validator.DistanceValidator,
tomoscan.validator.PixelValidator,
)
@pytest.mark.parametrize("validator_cls", phase_retrieval_validators)
def test_phase_retrieval_validator(validator_cls):
"""Test dark and flat validator on a complete dataset"""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
with h5py.File(scan.master_file, mode="a") as h5f:
entry_grp = h5f[scan.entry]
if "instrument/detector/x_pixel_size" in entry_grp:
del entry_grp["instrument/detector/x_pixel_size"]
if "instrument/detector/y_pixel_size" in entry_grp:
del entry_grp["instrument/detector/y_pixel_size"]
validator = validator_cls(scan)
assert (
not validator.is_valid()
), "scan have missing energy, distance and pixel size"
with h5py.File(scan.master_file, mode="a") as h5f:
beam_grp = h5f[scan.entry].require_group("beam")
if "incident_energy" in beam_grp:
del beam_grp["incident_energy"]
beam_grp["incident_energy"] = 1.0
beam_grp_2 = h5f[scan.entry].require_group("instrument/beam")
if "incident_energy" in beam_grp_2:
del beam_grp_2["incident_energy"]
beam_grp_2["incident_energy"] = 1.0
detector_grp = h5f[scan.entry].require_group("instrument/detector")
if "distance" in detector_grp:
del detector_grp["distance"]
detector_grp["distance"] = 1.0
detector_grp["x_pixel_size"] = 2.0
detector_grp["y_pixel_size"] = 1.0
validator.clear()
assert validator.is_valid(), "scan contains all information for phase retrieval"
frame_values_validators = (
tomoscan.validator.DarkDatasetValidator,
tomoscan.validator.FlatDatasetValidator,
tomoscan.validator.ProjectionDatasetValidator,
)
@pytest.mark.parametrize("validator_cls", frame_values_validators)
def test_frame_broken_vds(validator_cls):
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
create_ini_dark=True,
create_ini_ref=True,
create_final_ref=False,
) as scan:
validator = validator_cls(scan=scan, check_vds=True, check_values=False)
assert (
validator.is_valid()
), "if data is unchanged then validator should valid the entry"
validator.clear()
        # modify 'data' dataset to set a virtual dataset with a broken link (file does not exist)
with h5py.File(scan.master_file, mode="a") as h5f:
detector_grp = h5f[scan.entry]["instrument/detector"]
shape = detector_grp["data"].shape
del detector_grp["data"]
# create invalid VDS
layout = h5py.VirtualLayout(shape=shape, dtype="i4")
filename = "toto.h5"
vsource = h5py.VirtualSource(filename, "data", shape=shape)
layout[0 : shape[0]] = vsource
detector_grp.create_virtual_dataset("data", layout)
assert not validator.is_valid(), "should return broken dataset"
@pytest.mark.parametrize("validator_cls", frame_values_validators)
def test_frame_data_with_nan(validator_cls):
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
validator = validator_cls(scan=scan, check_vds=False, check_values=True)
assert (
validator.is_valid()
), "if data is unchanged then validor should valid the entry"
validator.clear()
# modify 'data' dataset to add nan values
with h5py.File(scan.master_file, mode="a") as h5f:
data = h5f[scan.entry]["instrument/detector/data"][()]
del h5f[scan.entry]["instrument/detector/data"]
data[:] = numpy.nan
h5f[scan.entry]["instrument/detector/data"] = data
assert not validator.is_valid(), "should return data contains nan"
high_level_validators = (
tomoscan.validator.BasicScanValidator,
tomoscan.validator.ReconstructionValidator,
)
@pytest.mark.parametrize("only_issue", (True, False))
@pytest.mark.parametrize("validator_cls", high_level_validators)
def test_high_level_validators_ok(capsys, validator_cls, only_issue):
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
distance=1.0,
energy=1.0,
) as scan:
validator = validator_cls(scan=scan)
assert validator.is_valid()
sys.stdout.write(validator.checkup(only_issues=only_issue))
captured = capsys.readouterr()
assert "No issue" in captured.out, "check print as been done on stdout"
validator.clear()
@pytest.mark.parametrize("only_issue", (True, False))
@pytest.mark.parametrize("check_values", (True, False))
@pytest.mark.parametrize("check_dark", (True, False))
@pytest.mark.parametrize("check_flat", (True, False))
@pytest.mark.parametrize("check_phase_retrieval", (True, False))
def test_reconstruction_validator_not_ok(
capsys, only_issue, check_values, check_dark, check_flat, check_phase_retrieval
):
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
) as scan:
validator = tomoscan.validator.ReconstructionValidator(
scan=scan,
check_values=check_values,
check_flat=check_flat,
check_dark=check_dark,
check_phase_retrieval=check_phase_retrieval,
)
sys.stdout.write(validator.checkup(only_issues=only_issue))
captured = capsys.readouterr()
if check_phase_retrieval:
assert "2 issues" in captured.out, "should have found 2 issues"
else:
"no issue" in captured.out, "should not have found any issue"
validator.clear()
        # modify 'data' dataset to set NaN inside. Now dark, flat and projection checks should fail
with h5py.File(scan.master_file, mode="a") as h5f:
data = h5f[scan.entry]["instrument/detector/data"][()]
del h5f[scan.entry]["instrument/detector/data"]
data[:] = numpy.nan
h5f[scan.entry]["instrument/detector/data"] = data
sys.stdout.write(validator.checkup(only_issues=only_issue))
captured = capsys.readouterr()
n_issues = 0
if check_phase_retrieval:
# there is no energy / distance
n_issues += 2
if check_values and check_flat:
# flat contains nan
n_issues += 1
if check_values and check_dark:
# dark contains nan
n_issues += 1
if check_values:
            # projections contain nan
n_issues += 1
if n_issues == 0:
"no issue" in captured.out, "should not have found any issue"
else:
assert (
f"{n_issues} issues" in captured.out
), f"should have found {n_issues} issues"
def test_validatorbase():
"""Test the Validator base class API"""
validator = tomoscan.validator.ValidatorBase()
with pytest.raises(NotImplementedError):
validator.is_valid()
with pytest.raises(NotImplementedError):
validator.run()
with pytest.raises(NotImplementedError):
validator.clear()
def test_is_valid_for_reconstruction():
"""test is_valid_for_reconstruction function."""
with HDF5MockContext(
scan_path=os.path.join(tempfile.mkdtemp(), "scan_test"),
n_proj=10,
n_ini_proj=10,
distance=1.0,
energy=1.0,
) as scan:
assert tomoscan.validator.is_valid_for_reconstruction(
scan=scan, need_phase_retrieval=True, check_values=True
), "This dataset should be valid for reconstruction with phase retrieval"
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1670330888.0
tomoscan-1.2.2/tomoscan/test/test_version.py 0000644 0236253 0006511 00000002736 00000000000 021403 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "22/06/2021"
import tomoscan.version
def test_version():
assert isinstance(tomoscan.version.version, str), "version should be a str"
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/test/test_volume_base.py 0000644 0236253 0006511 00000005534 00000000000 022216 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""Module containing validators"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "07/07/2022"
from tomoscan.volumebase import VolumeBase
import pytest
def test_volume_base():
"""Test VolumeBase file"""
    class UnimplementedVolumeBase(VolumeBase):
def deduce_data_and_metadata_urls(self, url):
return None, None
    volume_base = UnimplementedVolumeBase()
with pytest.raises(NotImplementedError):
volume_base.example_defined_from_str_identifier()
with pytest.raises(NotImplementedError):
volume_base.get_identifier()
with pytest.raises(NotImplementedError):
VolumeBase.from_identifier("")
with pytest.raises(NotImplementedError):
volume_base.save_data()
with pytest.raises(NotImplementedError):
volume_base.save_metadata()
with pytest.raises(NotImplementedError):
volume_base.save()
with pytest.raises(NotImplementedError):
volume_base.load_data()
with pytest.raises(NotImplementedError):
volume_base.load_metadata()
with pytest.raises(NotImplementedError):
volume_base.load()
with pytest.raises(NotImplementedError):
volume_base.browse_data_files()
with pytest.raises(NotImplementedError):
volume_base.browse_metadata_files()
with pytest.raises(NotImplementedError):
volume_base.browse_data_urls()
volume_base.position = (0, 1, 2)
assert isinstance(volume_base.position, tuple)
assert volume_base.position == (0, 1, 2)
volume_base.pixel_size = 12.3
assert volume_base.pixel_size == 12.3
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/test/test_volume_utils.py 0000644 0236253 0006511 00000014050 00000000000 022435 0 ustar 00payno soft 0000000 0000000 import os
import numpy
import pytest
from copy import deepcopy
from tomoscan.utils.volume import concatenate, update_metadata
from tomoscan.esrf.volume.edfvolume import EDFVolume
from tomoscan.esrf.volume.hdf5volume import HDF5Volume
from tomoscan.esrf.volume.jp2kvolume import JP2KVolume, has_minimal_openjpeg
from tomoscan.esrf.volume.tiffvolume import TIFFVolume, has_tifffile
_classes_to_test = [EDFVolume, HDF5Volume]
if has_minimal_openjpeg:
    _classes_to_test.append(JP2KVolume)
if has_tifffile:
    _classes_to_test.append(TIFFVolume)
def test_concatenate_volume_errors():
"""test some error raised by tomoscan.utils.volume.concatenate function"""
vol = HDF5Volume(
file_path="toto",
data_path="test",
)
with pytest.raises(TypeError):
        concatenate(output_volume=1, volumes=(), axis=1)
with pytest.raises(TypeError):
concatenate(output_volume=vol, volumes=(), axis="1")
with pytest.raises(ValueError):
concatenate(output_volume=vol, volumes=(), axis=6)
with pytest.raises(ValueError):
concatenate(output_volume=vol, volumes=(1,), axis=1)
with pytest.raises(TypeError):
concatenate(output_volume=vol, volumes="toto", axis=1)
@pytest.mark.parametrize("axis", (0, 1, 2))
@pytest.mark.parametrize("volume_class", _clases_to_test)
def test_concatenate_volume(tmp_path, volume_class, axis):
"""
test concatenation of 3 volumes into a single one
"""
# create folder to save data (and debug)
raw_data_dir = tmp_path / "raw_data"
raw_data_dir.mkdir()
output_dir = tmp_path / "output_dir"
output_dir.mkdir()
param_set_1 = {
"data": numpy.ones((100, 100, 100), dtype=numpy.uint16),
"metadata": {
"this": {
"is": {"metadata": 1},
},
},
}
param_set_2 = {
"data": numpy.arange(100 * 100 * 100, dtype=numpy.uint16).reshape(
100, 100, 100
),
"metadata": {
"this": {
"is": {"metadata": 2},
"isn't": {
"something": 12.3,
},
},
},
}
param_set_3 = {
"data": numpy.zeros((100, 100, 100), dtype=numpy.uint16),
"metadata": {
"yet": {
"another": {
"metadata": 12,
},
},
},
}
volumes = []
param_sets = (param_set_1, param_set_2, param_set_3)
for i_vol, vol_params in enumerate(param_sets):
if volume_class == HDF5Volume:
vol_params.update(
{
"file_path": os.path.join(raw_data_dir, f"volume_{i_vol}.hdf5"),
"data_path": "volume",
}
)
else:
vol_params.update({"folder": os.path.join(raw_data_dir, f"volume_{i_vol}")})
volume = volume_class(**vol_params)
volume.save()
volumes.append(volume)
volume.data = None
volume.metadata = None
volumes = tuple(volumes)
# apply concatenation
if volume_class == HDF5Volume:
final_volume = HDF5Volume(
file_path=os.path.join(output_dir, "final_vol.hdf5"),
data_path="volume",
)
else:
final_volume = volume_class(
folder=os.path.join(output_dir, "final_vol"),
)
concatenate(output_volume=final_volume, volumes=volumes, axis=axis)
if axis == 0:
expected_final_shape = (300, 100, 100)
elif axis == 1:
expected_final_shape = (100, 300, 100)
elif axis == 2:
expected_final_shape = (100, 100, 300)
else:
raise RuntimeError("axis should be in (0, 1, 2)")
assert final_volume.data is None
assert final_volume.get_volume_shape() == expected_final_shape
final_volume.load()
assert "this" in final_volume.metadata
    for volume in volumes:
        volume.load()
if axis == 0:
numpy.testing.assert_almost_equal(final_volume.data[0:100], volumes[0].data)
numpy.testing.assert_almost_equal(final_volume.data[100:200], volumes[1].data)
numpy.testing.assert_almost_equal(final_volume.data[200:300], volumes[2].data)
elif axis == 1:
numpy.testing.assert_almost_equal(final_volume.data[:, 0:100], volumes[0].data)
numpy.testing.assert_almost_equal(
final_volume.data[:, 100:200], volumes[1].data
)
numpy.testing.assert_almost_equal(
final_volume.data[:, 200:300], volumes[2].data
)
elif axis == 2:
numpy.testing.assert_almost_equal(
final_volume.data[:, :, 0:100], volumes[0].data
)
numpy.testing.assert_almost_equal(
final_volume.data[:, :, 100:200], volumes[1].data
)
numpy.testing.assert_almost_equal(
final_volume.data[:, :, 200:300], volumes[2].data
)
final_volume.overwrite = False
with pytest.raises(OSError):
concatenate(output_volume=final_volume, volumes=volumes, axis=axis)
final_volume.overwrite = True
concatenate(output_volume=final_volume, volumes=volumes, axis=axis)
def test_update_metadata():
ddict_1 = {
"key": {
"sub_key_1": "toto",
"sub_key_2": "tata",
},
"second_key": "test",
}
ddict_2 = {
"key": {
"sub_key_1": "test",
"sub_key_3": "test",
},
"third_key": "test",
}
assert update_metadata(deepcopy(ddict_1), deepcopy(ddict_2)) == {
"key": {
"sub_key_1": "test",
"sub_key_2": "tata",
"sub_key_3": "test",
},
"second_key": "test",
"third_key": "test",
}
assert update_metadata(deepcopy(ddict_2), deepcopy(ddict_1)) == {
"key": {
"sub_key_1": "toto",
"sub_key_2": "tata",
"sub_key_3": "test",
},
"second_key": "test",
"third_key": "test",
}
with pytest.raises(TypeError):
update_metadata(1, 2)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/test/utils.py 0000644 0236253 0006511 00000020706 00000000000 020014 0 ustar 00payno soft 0000000 0000000 #!/usr/bin/python
# coding: utf-8
#
# Project: Azimuthal integration
# https://github.com/pyFAI/pyFAI
#
# Copyright (C) 2015-2022 European Synchrotron Radiation Facility, Grenoble, France
#
# Principal author: Jérôme Kieffer (Jerome.Kieffer@ESRF.eu)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
__doc__ = """test modules for pyFAI."""
__authors__ = ["Jérôme Kieffer", "Valentin Valls", "Henri Payno"]
__license__ = "MIT"
__copyright__ = "European Synchrotron Radiation Facility, Grenoble, France"
__date__ = "07/02/2017"
import os
import shutil
from urllib.request import urlopen, ProxyHandler, build_opener
from tomoscan.esrf.mock import MockHDF5
import logging
import tempfile
from contextlib import AbstractContextManager
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)
class UtilsTest(object):
"""
Static class providing useful stuff for preparing tests.
"""
timeout = 100 # timeout in seconds for downloading datasets.tar.bz2
def __init__(self):
self.installed = False
@classmethod
def dataDownloaded(cls, archive_folder, archive_file):
return cls.dataIsHere(archive_folder=archive_folder) or cls.dataIsDownloaded(
archive_file=archive_file
)
@classmethod
def dataIsHere(cls, archive_folder):
return os.path.isdir(archive_folder)
@classmethod
def dataIsDownloaded(cls, archive_file):
return os.path.isfile(archive_file)
@classmethod
def getH5Dataset(cls, folderID):
path = os.path.abspath(
os.path.join(cls.getDatasets(name="h5_datasets"), folderID)
)
if os.path.exists(path):
return path
else:
raise RuntimeError("Coul'd find folder containing scan %s" % folderID)
@classmethod
def getOrangeTestFile(cls, folderID):
path = os.path.abspath(os.path.join(cls.getOrangeTestFiles(), folderID))
if os.path.isfile(path):
return path
else:
raise RuntimeError("Coul'd find folder containing scan %s" % folderID)
@classmethod
def getOrangeTestFiles(cls):
return cls.getDatasets(name="orangetestfiles")
@classmethod
def getDataset(cls, name):
return cls.getDatasets(name=name)
@classmethod
def getDatasets(cls, name="datasets"):
"""
        Download the requested dataset archive from edna-site.org if it is
        not yet available locally, then unpack it next to this module.
        @param name: name of the dataset archive (without the .tar.bz2 extension)
        @return: full path of the local folder containing the data
"""
archive_file = name + ".tar.bz2"
archive_folder = "".join((os.path.dirname(__file__), "/" + name + "/"))
archive_file = os.path.join(archive_folder, archive_file)
# download if needed
if not cls.dataDownloaded(
archive_folder=archive_folder, archive_file=archive_file
):
DownloadDataset(
dataset=os.path.basename(archive_file),
output_folder=archive_folder,
timeout=cls.timeout,
)
if not os.path.isfile(archive_file):
raise RuntimeError(
"Could not automatically "
f"download test images {archive_file}.\n If you are behind a firewall, "
"please set both environment variable http_proxy and https_proxy. "
"This even works under windows ! \n "
f"Otherwise please try to download the images manually from {url_base} / {archive_file}"
)
# decompress if needed
if os.path.isfile(archive_file):
logger.info("decompressing %s." % archive_file)
outdir = "".join((os.path.dirname(__file__)))
shutil.unpack_archive(archive_file, extract_dir=outdir, format="bztar")
os.remove(archive_file)
else:
logger.info("not trying to decompress it")
return archive_folder
@classmethod
def hasInternalTest(cls, dataset):
"""
        The idea of the internal tests is to have some large scans, stored
        locally, accessible for testing. This should be used only for unit
        tests that can be skipped.
"""
if "TOMWER_ADDITIONAL_TESTS_DIR" not in os.environ:
return False
else:
dir = os.path.join(os.environ["TOMWER_ADDITIONAL_TESTS_DIR"], dataset)
return os.path.isdir(dir)
@classmethod
def getInternalTestDir(cls, dataset):
if cls.hasInternalTest(dataset) is False:
return None
else:
return os.path.join(os.environ["TOMWER_ADDITIONAL_TESTS_DIR"], dataset)
url_base = "http://www.edna-site.org/pub/tomoscan/"
def DownloadDataset(dataset, output_folder, timeout, unpack=False):
# create if needed path scan
url = url_base + dataset
logger.info("Trying to download scan %s, timeout set to %ss", dataset, timeout)
dictProxies = {}
if "http_proxy" in os.environ:
dictProxies["http"] = os.environ["http_proxy"]
dictProxies["https"] = os.environ["http_proxy"]
if "https_proxy" in os.environ:
dictProxies["https"] = os.environ["https_proxy"]
if dictProxies:
proxy_handler = ProxyHandler(dictProxies)
opener = build_opener(proxy_handler).open
else:
opener = urlopen
logger.info("wget %s" % url)
data = opener(url, data=None, timeout=timeout).read()
logger.info("Image %s successfully downloaded." % dataset)
if not os.path.isdir(output_folder):
os.mkdir(output_folder)
try:
archive_folder = os.path.join(output_folder, os.path.basename(dataset))
with open(archive_folder, "wb") as outfile:
outfile.write(data)
except IOError:
raise IOError(
"unable to write downloaded \
data to disk at %s"
% archive_folder
)
if unpack is True:
shutil.unpack_archive(archive_folder, extract_dir=output_folder, format="bztar")
os.remove(archive_folder)
class MockContext(AbstractContextManager):
def __init__(self, output_folder):
self._output_folder = output_folder
if self._output_folder is None:
            self._output_folder = tempfile.mkdtemp()
self._output_folder_existed = False
elif not os.path.exists(self._output_folder):
os.makedirs(self._output_folder)
self._output_folder_existed = False
else:
self._output_folder_existed = True
super().__init__()
def __init_subclass__(cls, **kwargs):
mock_class = kwargs.get("mock_class", None)
if mock_class is None:
raise KeyError("mock_class should be provided to the " "metaclass")
cls._mock_class = mock_class
def __exit__(self, exc_type, exc_val, exc_tb):
        # only remove folders that this context manager created itself
        if not self._output_folder_existed:
            shutil.rmtree(self._output_folder)
class HDF5MockContext(MockContext, mock_class=MockHDF5):
"""
Util class to provide a context with a new Mock HDF5 file
"""
def __init__(self, scan_path, n_proj, **kwargs):
super().__init__(output_folder=os.path.dirname(scan_path))
self._n_proj = n_proj
self._mocks_params = kwargs
self._scan_path = scan_path
def __enter__(self):
return MockHDF5(
scan_path=self._scan_path, n_proj=self._n_proj, **self._mocks_params
).scan
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/tomoobject.py 0000644 0236253 0006511 00000004443 00000000000 020042 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
# Copyright (C) 2016- 2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
#############################################################################
"""Module containing the TomoObject class. Parent class of any tomo object"""
__authors__ = ["H.Payno"]
__license__ = "MIT"
__date__ = "27/01/2022"
from typing import Union, Optional
from .identifier import BaseIdentifier
from tomoscan.utils import BoundingBox1D
class TomoObject:
"""Parent class of all tomographic object in tomoscan"""
@staticmethod
def from_identifier(identifier: Union[str, BaseIdentifier]):
"""Return the Dataset from a identifier"""
raise NotImplementedError("Base class")
def get_identifier(self) -> BaseIdentifier:
"""dataset unique identifier. Can be for example a hdf5 and
en entry from which the dataset can be rebuild"""
raise NotImplementedError("Base class")
def get_bounding_box(self, axis: Optional[Union[str, int]] = None) -> BoundingBox1D:
"""
Return the bounding box covered by the Tomo object
axis is expected to be in (0, 1, 2) or (x==0, y==1, z==2)
"""
raise NotImplementedError("Base class")
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1684328205.169433
tomoscan-1.2.2/tomoscan/unitsystem/ 0000755 0236253 0006511 00000000000 00000000000 017542 5 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/unitsystem/__init__.py 0000644 0236253 0006511 00000003171 00000000000 021655 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module for the unit system"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "09/02/2021"
from .unit import Unit # noqa F401
from .energysystem import EnergySI # noqa F401
from .metricsystem import MetricSystem # noqa F401
from .timesystem import TimeSystem # noqa F401
from .electriccurrentsystem import ElectricCurrentSystem # noqa F401
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/unitsystem/electriccurrentsystem.py 0000644 0236253 0006511 00000004621 00000000000 024561 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = [
"H. Payno",
]
__license__ = "MIT"
__date__ = "28/04/2022"
from tomoscan.unitsystem.unit import Unit
class ElectricCurrentSystem(Unit):
"""Unit system for electric potential SI units (volt)"""
AMPERE = 1.0
MILLIAMPERE = AMPERE / 1000.0
    KILOAMPERE = AMPERE * 1e3
@classmethod
def from_str(cls, value: str):
assert isinstance(value, str)
if value.lower() in ("a", "ampere"):
return ElectricCurrentSystem.AMPERE
elif value.lower() in ("ma", "milliampere"):
return ElectricCurrentSystem.MILLIAMPERE
elif value.lower() in ("ka", "kiloampere"):
return ElectricCurrentSystem.KILOAMPERE
else:
raise ValueError("Cannot convert: %s" % value)
def __str__(self):
if self == ElectricCurrentSystem.AMPERE:
return "A"
elif self == ElectricCurrentSystem.MILLIAMPERE:
return "mA"
elif self == ElectricCurrentSystem.KILOAMPERE:
return "kA"
else:
raise ValueError("Cannot convert: to voltage system")
ampere = ElectricCurrentSystem.AMPERE
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/unitsystem/energysystem.py 0000644 0236253 0006511 00000006311 00000000000 022653 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["P. Paleo", "H. Payno"]
__license__ = "MIT"
__date__ = "09/02/2022"
from tomoscan.unitsystem.unit import Unit
# Constants
_elementary_charge_coulomb = 1.602176634e-19
_joule_si = 1.0
class EnergySI(Unit):
"""Util enum for energy in SI units (Joules)"""
JOULE = _joule_si
ELEMCHARGE = _elementary_charge_coulomb
ELECTRONVOLT = _elementary_charge_coulomb
KILOELECTRONVOLT = _elementary_charge_coulomb * 1e3
MEGAELECTRONVOLT = _elementary_charge_coulomb * 1e6
GIGAELECTRONVOLT = _elementary_charge_coulomb * 1e9
KILOJOULE = 1e3 * _joule_si
@classmethod
def from_str(cls, value: str):
if value.lower() in ("j", "joule"):
return EnergySI.JOULE
elif value.lower() in ("kj", "kilojoule"):
return EnergySI.KILOJOULE
elif value.lower() in ("ev", "electronvolt"):
return EnergySI.ELECTRONVOLT
elif value.lower() in ("kev", "kiloelectronvolt"):
return EnergySI.KILOELECTRONVOLT
elif value.lower() in ("mev", "megaelectronvolt"):
return EnergySI.MEGAELECTRONVOLT
elif value.lower() in ("gev", "gigaelectronvolt"):
return EnergySI.GIGAELECTRONVOLT
elif value.lower() in ("e", "qe"):
return EnergySI.ELEMCHARGE
else:
raise ValueError("Cannot convert: %s" % value)
def __str__(self):
if self is EnergySI.JOULE:
return "J"
elif self is EnergySI.KILOJOULE:
return "kJ"
elif self is EnergySI.ELECTRONVOLT:
return "eV"
elif self is EnergySI.KILOELECTRONVOLT:
return "keV"
        elif self is EnergySI.MEGAELECTRONVOLT:
            return "MeV"
        elif self is EnergySI.GIGAELECTRONVOLT:
            return "GeV"
elif self is EnergySI.ELEMCHARGE:
# in fact will never be called because EnergySI.ELEMCHARGE is EnergySI.ELECTRONVOLT
return "e"
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/unitsystem/metricsystem.py 0000644 0236253 0006511 00000014617 00000000000 022655 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["P. Paleo", "H. Payno"]
__license__ = "MIT"
__date__ = "09/02/2022"
from silx.utils.deprecation import deprecated_warning
from tomoscan.unitsystem.energysystem import (
EnergySI,
)  # noqa F401 kept for backward compatibility
from tomoscan.unitsystem.unit import Unit
# Default units:
# - length: meter (m)
# - energy: kilo Electronvolt (keV)
_meter = 1.0
_kev = 1.0
class MetricSystem(Unit):
"""Util enum to retrieve metric"""
METER = _meter
m = _meter
CENTIMETER = _meter / 100.0
MILLIMETER = _meter / 1000.0
MICROMETER = _meter * 1e-6
NANOMETER = _meter * 1e-9
KILOELECTRONVOLT = _kev
ELECTRONVOLT = _kev * 1e-3
JOULE = _kev / EnergySI.KILOELECTRONVOLT.value
KILOJOULE = _kev / EnergySI.KILOELECTRONVOLT.value * 1e3
@classmethod
def from_str(cls, value: str):
assert isinstance(value, str)
if value.lower() in ("m", "meter"):
return MetricSystem.METER
elif value.lower() in ("cm", "centimeter"):
return MetricSystem.CENTIMETER
elif value.lower() in ("mm", "millimeter"):
return MetricSystem.MILLIMETER
elif value.lower() in ("um", "micrometer", "microns"):
return MetricSystem.MICROMETER
elif value.lower() in ("nm", "nanometer"):
deprecated_warning(
"Function",
"MetricSystem.from_str for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.from_str",
since_version="0.8.0",
)
return MetricSystem.NANOMETER
elif value.lower() in ("kev", "kiloelectronvolt"):
deprecated_warning(
"Function",
"MetricSystem.from_str for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.from_str",
since_version="0.8.0",
)
return MetricSystem.KILOELECTRONVOLT
elif value.lower() in ("ev", "electronvolt"):
deprecated_warning(
"Function",
"MetricSystem.from_str for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.from_str",
since_version="0.8.0",
)
return MetricSystem.ELECTRONVOLT
elif value.lower() in ("j", "joule"):
deprecated_warning(
"Function",
"MetricSystem.from_str for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.from_str",
since_version="0.8.0",
)
return MetricSystem.JOULE
elif value.lower() in ("kj", "kilojoule"):
deprecated_warning(
"Function",
"MetricSystem.from_str for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.from_str",
since_version="0.8.0",
)
return MetricSystem.KILOJOULE
else:
raise ValueError("Cannot convert: %s" % value)
def __str__(self):
if self == MetricSystem.METER:
return "m"
elif self == MetricSystem.CENTIMETER:
return "cm"
elif self == MetricSystem.MILLIMETER:
return "mm"
elif self == MetricSystem.MICROMETER:
return "um"
elif self == MetricSystem.NANOMETER:
return "nm"
elif self == MetricSystem.KILOELECTRONVOLT:
deprecated_warning(
"Function",
"MetricSystem.__str__ for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.__str__",
since_version="0.8.0",
)
return "keV"
elif self == MetricSystem.ELECTRONVOLT:
deprecated_warning(
"Function",
"MetricSystem.__str__ for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.__str__",
since_version="0.8.0",
)
return "eV"
elif self == MetricSystem.JOULE:
deprecated_warning(
"Function",
"MetricSystem.__str__ for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.__str__",
since_version="0.8.0",
)
return "J"
elif self == MetricSystem.KILOJOULE:
deprecated_warning(
"Function",
"MetricSystem.__str__ for energies",
reason="Must be part of EnergySI instead",
replacement="EnergySI.__str__",
since_version="0.8.0",
)
return "kJ"
else:
raise ValueError(f"Cannot convert: {self}")
m = MetricSystem.METER
meter = MetricSystem.METER
centimeter = MetricSystem.CENTIMETER
cm = centimeter
millimeter = MetricSystem.MILLIMETER
mm = MetricSystem.MILLIMETER
micrometer = MetricSystem.MICROMETER
nanometer = MetricSystem.NANOMETER
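
# Usage sketch added for illustration (not part of the original module):
# lengths are stored in the default unit (meter); from_value accepts either a
# string alias or a raw numeric value.
if __name__ == "__main__":
    pixel_size = 2.5 * MetricSystem.MICROMETER.value  # 2.5 um expressed in meters
    assert MetricSystem.from_value("mm") is MetricSystem.MILLIMETER
    print(f"pixel size: {pixel_size / MetricSystem.MILLIMETER.value} mm")  # -> 0.0025 mm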
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/unitsystem/timesystem.py 0000644 0236253 0006511 00000006137 00000000000 022326 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["P. Paleo", "H. Payno"]
__license__ = "MIT"
__date__ = "09/02/2022"
from tomoscan.unitsystem.unit import Unit
class TimeSystem(Unit):
"""Unit system for time in SI units (seconds)"""
SECOND = 1.0
MINUTE = 60.0 * SECOND
HOUR = 60.0 * MINUTE
DAY = 24.0 * HOUR
MILLI_SECOND = SECOND * 1e-3
MICRO_SECOND = SECOND * 1e-6
NANO_SECOND = SECOND * 1e-9
@classmethod
def from_str(cls, value: str):
assert isinstance(value, str)
if value.lower() in ("s", "second"):
return TimeSystem.SECOND
elif value.lower() in ("m", "minute"):
return TimeSystem.MINUTE
elif value.lower() in ("h", "hour"):
return TimeSystem.HOUR
elif value.lower() in (
"d",
"day",
):
return TimeSystem.DAY
elif value.lower() in ("ns", "nanosecond", "nano-second"):
return TimeSystem.NANO_SECOND
elif value.lower() in ("microsecond", "micro-second"):
return TimeSystem.MICRO_SECOND
elif value.lower() in ("millisecond", "milli-second"):
return TimeSystem.MILLI_SECOND
else:
raise ValueError("Cannot convert: %s" % value)
def __str__(self):
if self == TimeSystem.SECOND:
return "second"
elif self == TimeSystem.MINUTE:
return "minute"
elif self == TimeSystem.HOUR:
return "hour"
elif self == TimeSystem.DAY:
return "day"
elif self == TimeSystem.MILLI_SECOND:
return "millisecond"
elif self == TimeSystem.MICRO_SECOND:
return "microsecond"
elif self == TimeSystem.NANO_SECOND:
return "nanosecond"
else:
raise ValueError("Cannot convert: to time system")
second = TimeSystem.SECOND
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1669995108.0
tomoscan-1.2.2/tomoscan/unitsystem/unit.py 0000644 0236253 0006511 00000003466 00000000000 021104 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["P. Paleo", "H. Payno"]
__license__ = "MIT"
__date__ = "09/02/2022"
from silx.utils.enum import Enum as _Enum
class Unit(_Enum):
"""Base class for all Unit. Children class are also expected to inherit from silx Enum class"""
@classmethod
def from_str(cls, value: str):
raise NotImplementedError("Base class")
@classmethod
def from_value(cls, value):
if isinstance(value, str):
return cls.from_str(value=value)
else:
return super().from_value(value=value)
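
# Usage sketch added for illustration (not part of the original module):
# _DemoUnit is a hypothetical subclass showing the from_value dispatch, which
# routes strings to the child's from_str and numbers to silx's Enum lookup.
if __name__ == "__main__":

    class _DemoUnit(Unit):
        BASE = 1.0

        @classmethod
        def from_str(cls, value: str):
            if value.lower() in ("b", "base"):
                return _DemoUnit.BASE
            raise ValueError("Cannot convert: %s" % value)

    assert _DemoUnit.from_value("base") is _DemoUnit.BASE
    assert _DemoUnit.from_value(1.0) is _DemoUnit.BASE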
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/unitsystem/voltagesystem.py 0000644 0236253 0006511 00000003671 00000000000 023031 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2020 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["P. Paleo", "H. Payno"]
__license__ = "MIT"
__date__ = "15/02/2022"
from tomoscan.unitsystem.unit import Unit
class VoltageSystem(Unit):
"""Unit system for electric potential SI units (volt)"""
VOLT = 1.0
@classmethod
def from_str(cls, value: str):
assert isinstance(value, str)
if value.lower() in ("v", "volt"):
return VoltageSystem.VOLT
else:
raise ValueError("Cannot convert: %s" % value)
def __str__(self):
if self == VoltageSystem.VOLT:
return "volt"
else:
raise ValueError("Cannot convert: to voltage system")
volt = VoltageSystem.VOLT
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1684328205.169433
tomoscan-1.2.2/tomoscan/utils/ 0000755 0236253 0006511 00000000000 00000000000 016456 5 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/utils/__init__.py 0000644 0236253 0006511 00000000257 00000000000 020573 0 ustar 00payno soft 0000000 0000000 from .geometry import BoundingBox1D, BoundingBox3D, get_subvolume_shape # noqa F401
from .decorator import docstring # noqa F401
from .io import SharedLockPool # noqa F401
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/utils/decorator.py 0000644 0236253 0006511 00000002142 00000000000 021011 0 ustar 00payno soft 0000000 0000000 import functools
def _docstring(dest, origin):
"""Implementation of docstring decorator.
It patches dest.__doc__.
"""
if not isinstance(dest, type) and isinstance(origin, type):
# func is not a class, but origin is, get the method with the same name
try:
origin = getattr(origin, dest.__name__)
except AttributeError:
raise ValueError("origin class has no %s method" % dest.__name__)
dest.__doc__ = origin.__doc__
return dest
def docstring(origin):
"""Decorator to initialize the docstring from another source.
This is useful to duplicate a docstring for inheritance and composition.
If origin is a method or a function, it copies its docstring.
If origin is a class, the docstring is copied from the method
of that class which has the same name as the method/function
being decorated.
:param origin:
The method, function or class from which to get the docstring
:raises ValueError:
        If the origin class has no method with the same name as the
        decorated function
"""
return functools.partial(_docstring, origin=origin)
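
# Usage sketch added for illustration (not part of the original module): copy
# the docstring of a parent-class method onto an override without retyping it.
if __name__ == "__main__":

    class _Base:
        def run(self):
            """Run the processing."""

    class _Derived(_Base):
        @docstring(_Base)
        def run(self):
            pass

    assert _Derived.run.__doc__ == "Run the processing."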
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1672907679.0
tomoscan-1.2.2/tomoscan/utils/geometry.py 0000644 0236253 0006511 00000005640 00000000000 020670 0 ustar 00payno soft 0000000 0000000 import numpy
class _BoundingBox:
def __init__(self, v1, v2):
if not numpy.isscalar(v1):
v1 = tuple(v1)
if not numpy.isscalar(v2):
v2 = tuple(v2)
self._min = min(v1, v2)
self._max = max(v1, v2)
@property
def min(self) -> float:
return self._min
@property
def max(self) -> float:
return self._max
def __str__(self):
return f"({self.min}, {self.max})"
def __eq__(self, other):
if not isinstance(other, _BoundingBox):
return False
else:
return self.min == other.min and self.max == other.max
def get_overlap(self, other_bb):
raise NotImplementedError("Base class")
class BoundingBox1D(_BoundingBox):
def get_overlap(self, other_bb):
if not isinstance(other_bb, BoundingBox1D):
raise TypeError(f"Can't compare a {BoundingBox1D} with {type(other_bb)}")
if (
(self.max >= other_bb.min and self.min <= other_bb.max)
or (other_bb.max >= self.min and other_bb.min <= self.max)
or (other_bb.min <= self.min and other_bb.max >= self.max)
):
return BoundingBox1D(
max(self.min, other_bb.min), min(self.max, other_bb.max)
)
else:
return None
def __eq__(self, other):
if isinstance(other, (tuple, list)):
return len(other) == 2 and self.min == other[0] and self.max == other[1]
else:
return super().__eq__(other)
class BoundingBox3D(_BoundingBox):
def get_overlap(self, other_bb):
if not isinstance(other_bb, BoundingBox3D):
raise TypeError(f"Can't compare a {BoundingBox3D} with {type(other_bb)}")
self_bb_0 = BoundingBox1D(self.min[0], self.max[0])
self_bb_1 = BoundingBox1D(self.min[1], self.max[1])
self_bb_2 = BoundingBox1D(self.min[2], self.max[2])
other_bb_0 = BoundingBox1D(other_bb.min[0], other_bb.max[0])
other_bb_1 = BoundingBox1D(other_bb.min[1], other_bb.max[1])
other_bb_2 = BoundingBox1D(other_bb.min[2], other_bb.max[2])
overlap_0 = self_bb_0.get_overlap(other_bb_0)
overlap_1 = self_bb_1.get_overlap(other_bb_1)
overlap_2 = self_bb_2.get_overlap(other_bb_2)
if overlap_0 is not None and overlap_1 is not None and overlap_2 is not None:
return BoundingBox3D(
(overlap_0.min, overlap_1.min, overlap_2.min),
(overlap_0.max, overlap_1.max, overlap_2.max),
)
def get_subvolume_shape(chunk, volume_shape):
"""
Get the shape of a sub-volume to extract in a volume.
:param chunk: tuple of slice
:param volume_shape: tuple of int
"""
shape = []
for c, v in zip(chunk, volume_shape):
start = c.start or 0
end = c.stop or v
if end < 0:
end += v
shape.append(end - start)
return tuple(shape)
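
# Usage sketch added for illustration (not part of the original module):
# overlap of two bounding boxes and the shape of a chunked sub-volume.
if __name__ == "__main__":
    bb_a = BoundingBox1D(0.0, 1.0)
    bb_b = BoundingBox1D(0.5, 2.0)
    print(bb_a.get_overlap(bb_b))  # -> (0.5, 1.0)
    # shape of volume[0:10, :, 5:-5] for a (100, 200, 300) volume
    chunk = (slice(0, 10), slice(None), slice(5, -5))
    print(get_subvolume_shape(chunk, (100, 200, 300)))  # -> (10, 200, 290)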
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/utils/hdf5.py 0000644 0236253 0006511 00000005226 00000000000 017663 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "23/02/2022"
import contextlib
import h5py
from tomoscan.io import HDF5File
try:
import hdf5plugin # noqa F401
except ImportError:
pass
from silx.io.url import DataUrl
class _BaseReader(contextlib.AbstractContextManager):
def __init__(self, url: DataUrl):
if not isinstance(url, DataUrl):
raise TypeError(f"url should be an instance of DataUrl. Not {type(url)}")
        if url.scheme() not in ("silx", "h5py"):
            raise ValueError("Valid schemes are 'silx' and 'h5py'")
if url.data_slice() is not None:
raise ValueError(
"Data slices are not managed. Data path should "
"point to a bliss node (h5py.Group)"
)
self._url = url
self._file_handler = None
def __exit__(self, *exc):
return self._file_handler.close()
class DatasetReader(_BaseReader):
    """Context manager used to read an HDF5 dataset (such as a bliss node)"""
def __enter__(self):
self._file_handler = HDF5File(filename=self._url.file_path(), mode="r")
entry = self._file_handler[self._url.data_path()]
if not isinstance(entry, h5py.Dataset):
raise ValueError(
"Data path ({}) should point to a dataset (h5py.Dataset)".format(
self._url.path()
)
)
return entry
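# A hedged usage sketch for DatasetReader (the file and data paths below are
# hypothetical):
#
#     from silx.io.url import DataUrl
#     from tomoscan.utils.hdf5 import DatasetReader
#
#     url = DataUrl(file_path="scan.h5", data_path="/entry/data/data", scheme="silx")
#     with DatasetReader(url) as dataset:  # dataset is an h5py.Dataset
#         print(dataset.shape, dataset.dtype)
#     # the underlying file handler is closed when leaving the context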
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/utils/io.py 0000644 0236253 0006511 00000002750 00000000000 017443 0 ustar 00payno soft 0000000 0000000 from contextlib import contextmanager, ExitStack
import threading
class SharedLockPool:
    """
    Allows acquiring locks identified by name (any hashable type) recursively.
    """
def __init__(self):
self.__locks = {}
self.__locks_mutex = threading.Semaphore(value=1)
def __len__(self):
return len(self.__locks)
@property
def names(self):
return list(self.__locks.keys())
@contextmanager
def _modify_locks(self):
self.__locks_mutex.acquire()
try:
yield self.__locks
finally:
self.__locks_mutex.release()
    @contextmanager
    def acquire(self, name):
        # reference-count the named lock so it is only discarded once the last
        # user releases it: popping it unconditionally would let two threads
        # end up holding two different locks for the same name
        with self._modify_locks() as locks:
            lock, n_ref = locks.get(name, (None, 0))
            if lock is None:
                lock = threading.RLock()
            locks[name] = (lock, n_ref + 1)
        lock.acquire()
        try:
            yield
        finally:
            lock.release()
            with self._modify_locks() as locks:
                lock, n_ref = locks[name]
                if n_ref <= 1:
                    locks.pop(name)
                else:
                    locks[name] = (lock, n_ref - 1)
@contextmanager
def acquire_context_creation(self, name, contextmngr, *args, **kwargs):
"""
Acquire lock only during context creation.
This can be used for example to protect the opening of a file
but not hold the lock while the file is open.
"""
with ExitStack() as stack:
with self.acquire(name):
ret = stack.enter_context(contextmngr(*args, **kwargs))
yield ret
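# A hedged usage sketch for SharedLockPool (names below are hypothetical):
#
#     import h5py
#
#     pool = SharedLockPool()
#     with pool.acquire("master.h5"):
#         pass  # critical section for anything identified by "master.h5"
#
#     # only the call to h5py.File(...) is protected by the lock; the lock is
#     # already released while the file is being used
#     with pool.acquire_context_creation("master.h5", h5py.File, "master.h5", "r") as h5f:
#         print(list(h5f.keys()))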
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/utils/volume.py 0000644 0236253 0006511 00000021250 00000000000 020337 0 ustar 00payno soft 0000000 0000000 import os
import h5py
import numpy
import logging
from tomoscan.volumebase import VolumeBase
from tomoscan.esrf.volume import HDF5Volume
from tomoscan.io import HDF5File
from tomoscan.utils.hdf5 import DatasetReader
from collections.abc import Mapping
_logger = logging.getLogger(__name__)
def concatenate(output_volume: VolumeBase, volumes: tuple, axis: int) -> None:
    """
    Function to do a 'raw' concatenation of volumes.

    This is agnostic of any metadata: if you want to ensure coherence of metadata (and data) you must do it yourself.
    Data will be concatenated in the order the volumes are provided. Volume data must be 3D; the concatenated data
    will also be 3D, concatenated over the axis `axis`.
    Concatenation is done through a virtual dataset when the input volumes and output_volume are all HDF5Volume instances.

    warning: concatenation enforces writing data and metadata to disk

    :param VolumeBase output_volume: volume to create
    :param tuple volumes: tuple of VolumeBase instances
    :param int axis: axis to use for the concatenation. Must be in (0, 1, 2)
    """
# 0. do some check
if not isinstance(output_volume, VolumeBase):
raise TypeError(
f"output_volume is expected to be an instance of {VolumeBase}. {type(output_volume)} provided"
)
if not isinstance(axis, int):
raise TypeError(f"axis must be an int. {type(axis)} provided")
elif axis not in (0, 1, 2):
raise ValueError(f"axis must be in (0, 1, 2). {axis} provided")
if not isinstance(volumes, tuple):
raise TypeError(f"volumes must be a tuple. {type(volumes)} provided")
    else:
        is_invalid = lambda y: not isinstance(y, VolumeBase)
        invalids = tuple(filter(is_invalid, volumes))
        if len(invalids) > 0:
            raise ValueError(f"Non-volume objects found: {invalids}")
# 1. compute final shape
def get_volume_shape():
if axis == 0:
new_shape = [0, None, None]
elif axis == 1:
new_shape = [None, 0, None]
else:
new_shape = [None, None, 0]
for vol in volumes:
vol_shape = vol.get_volume_shape()
if vol_shape is None:
raise ValueError(
f"Unable to find shape for volume {vol.get_identifier().to_str()}"
)
new_shape[axis] += vol_shape[axis]
if axis == 0:
if new_shape[1] is None:
new_shape[1], new_shape[2] = vol_shape[1], vol_shape[2]
elif new_shape[1] != vol_shape[1] or new_shape[2] != vol_shape[2]:
raise ValueError("Found incoherent shapes. Unable to concatenate")
elif axis == 1:
if new_shape[0] is None:
new_shape[0], new_shape[2] = vol_shape[0], vol_shape[2]
elif new_shape[0] != vol_shape[0] or new_shape[2] != vol_shape[2]:
raise ValueError("Found incoherent shapes. Unable to concatenate")
else:
if new_shape[0] is None:
new_shape[0], new_shape[1] = vol_shape[0], vol_shape[1]
elif new_shape[0] != vol_shape[0] or new_shape[1] != vol_shape[1]:
raise ValueError("Found incoherent shapes. Unable to concatenate")
return tuple(new_shape)
final_shape = get_volume_shape()
if final_shape is None:
# should never be raised. Other error type is expected to be raised first
raise RuntimeError("Unable to get final volume shape")
# 2. Handle volume data (concatenation)
    if isinstance(output_volume, HDF5Volume) and all(
        isinstance(vol, HDF5Volume) for vol in volumes
    ):
        # 2.1 in the case of HDF5 we can shortcut this by creating a virtual dataset, which highly speeds up processing by avoiding copies
        # note: in theory this could be done for any input_volume type using external datasets but we don't want to spend ages on
        # this use case for now. Some work around this (using EDF) has been done in nxtomomill; for information see https://gitlab.esrf.fr/tomotools/nxtomomill/-/merge_requests/115
_logger.info("start creation of external dataset")
with DatasetReader(volumes[0].data_url) as dataset:
data_type = dataset.dtype
with HDF5File(output_volume.data_url.file_path(), mode="a") as h5s:
# 2.1.1 check data path
if output_volume.data_url.data_path() in h5s:
if output_volume.overwrite:
del h5s[output_volume.data_url.data_path()]
else:
raise OSError(
f"Unable to save data to {output_volume.data_url.data_path()}. This path already exists in {output_volume.data_url.file_path()}. If you want you can ask to overwrite it (from the output volume)."
)
# 2.1.2 create virtual layout
v_layout = h5py.VirtualLayout(
shape=final_shape,
dtype=data_type,
)
# 2.1.3 create virtual source
start_index = 0
for volume in volumes:
# provide relative path
rel_file_path = os.path.relpath(
volume.data_url.file_path(),
os.path.dirname(output_volume.data_url.file_path()),
)
rel_file_path = "./" + rel_file_path
data_path = volume.data_url.data_path()
vol_shape = volume.get_volume_shape()
vs = h5py.VirtualSource(
rel_file_path,
name=data_path,
shape=vol_shape,
)
stop_index = start_index + vol_shape[axis]
if axis == 0:
v_layout[start_index:stop_index] = vs
elif axis == 1:
v_layout[:, start_index:stop_index, :] = vs
elif axis == 2:
v_layout[:, :, start_index:stop_index] = vs
start_index = stop_index
# 2.1.4 create virtual dataset
h5s.create_virtual_dataset(
name=output_volume.data_url.data_path(), layout=v_layout
)
else:
# 2.1 default case (duplicate all input data slice by slice)
        # 2.1.1 special case of concatenation over axis 0
if axis == 0:
def iter_input():
for vol in volumes:
for slice in vol.browse_slices():
yield slice
for frame_dumper, input_slice in zip(
output_volume.data_file_saver_generator(
n_frames=final_shape[0],
data_url=output_volume.data_url,
overwrite=output_volume.overwrite,
),
iter_input(),
):
frame_dumper[:] = input_slice
else:
# 2.1.2 concatenation with data duplication over axis 1 or 2
for i_z, frame_dumper in enumerate(
output_volume.data_file_saver_generator(
n_frames=final_shape[0],
data_url=output_volume.data_url,
overwrite=output_volume.overwrite,
)
):
if axis == 1:
frame_dumper[:] = numpy.concatenate(
[vol.get_slice(axis=0, index=i_z) for vol in volumes],
axis=0,
)
elif axis == 2:
frame_dumper[:] = numpy.concatenate(
[vol.get_slice(axis=0, index=i_z) for vol in volumes],
axis=1,
)
else:
raise RuntimeError
# 3. handle metadata
for vol in volumes:
if vol.metadata is None:
try:
vol.load_metadata(store=True)
except Exception as e:
                _logger.error(f"failed to load metadata for {vol}. Error is {e}")
output_volume.metadata = {}
[update_metadata(output_volume.metadata, vol.metadata) for vol in volumes]
output_volume.save_metadata()
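# A hedged usage sketch for `concatenate` (file names and data paths below are
# hypothetical; with HDF5 volumes the output is written as a virtual dataset):
#
#     from tomoscan.esrf.volume import HDF5Volume
#
#     vol_1 = HDF5Volume(file_path="vol_1.h5", data_path="entry")
#     vol_2 = HDF5Volume(file_path="vol_2.h5", data_path="entry")
#     output = HDF5Volume(file_path="concatenated.h5", data_path="entry", overwrite=True)
#     concatenate(output_volume=output, volumes=(vol_1, vol_2), axis=0)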
def update_metadata(ddict_1: dict, ddict_2: dict) -> dict:
    """
    Update metadata dict `ddict_1` with values from `ddict_2`, recursing into nested mappings.

    warning: this will modify ddict_1 in place
    """
if not isinstance(ddict_1, dict) or not isinstance(ddict_2, dict):
raise TypeError(f"ddict_1 and ddict_2 are expected to be instances of {dict}")
for key, value in ddict_2.items():
if isinstance(value, Mapping):
ddict_1[key] = update_metadata(ddict_1.get(key, {}), value)
else:
ddict_1[key] = value
return ddict_1
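# For example (a minimal sketch):
#
#     d1 = {"a": {"x": 1}, "b": 2}
#     d2 = {"a": {"y": 3}}
#     update_metadata(d1, d2)
#     # d1 is now {"a": {"x": 1, "y": 3}, "b": 2}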
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/validator.py 0000644 0236253 0006511 00000041231 00000000000 017656 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""Module containing validators"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "24/08/2021"
import numpy
from tomoscan.scanbase import TomoScanBase
from tomoscan.esrf.scan.utils import dataset_has_broken_vds
from tomoscan.esrf.scan.utils import get_compacted_dataslices
from silx.io.utils import get_data
import logging
from typing import Optional
import weakref
_logger = logging.getLogger(__name__)
_VALIDATOR_NAME_TXT_AJUST = 15
_LOCATION_TXT_AJUST = 40
_SCAN_NAME_TXT_AJUST = 30
_BOMB_UCODE = "\U0001f4A3"
_EXPLOSION_UCODE = "\U0001f4A5"
_THUMB_UP_UCODE = "\U0001f44d"
_OK_UCODE = "\U0001f44c"
class ValidatorBase:
"""Base validator class"""
def is_valid(self) -> bool:
raise NotImplementedError("Base class")
def run(self) -> bool:
raise NotImplementedError("Base class")
def clear(self) -> None:
raise NotImplementedError("Base class")
class _ScanParamValidator(ValidatorBase):
def __init__(self, scan: TomoScanBase, name: str, location: Optional[str]):
if not isinstance(scan, TomoScanBase):
raise TypeError(f"{scan} is expected to be an instance of {TomoScanBase}")
self._scan = weakref.ref(scan)
self.__name = name
self.__location = location
self._valid = None
@property
def name(self):
return self.__name
def __str__(self):
return self.info()
def info(self, with_scan=True):
info = [
self.name.ljust(_VALIDATOR_NAME_TXT_AJUST) + ":",
"VALID".ljust(7) if self.is_valid() else "INVALID".ljust(7),
]
if with_scan:
info.insert(
0,
str(self.scan).ljust(_SCAN_NAME_TXT_AJUST) + " - ",
)
if not self.is_valid():
info.append(
f"Expected location: {self.__location}".ljust(_LOCATION_TXT_AJUST)
)
return " ".join(info)
def _run(self):
"""Function to overwrite to compute the validity condition"""
raise NotImplementedError("Base class")
@property
def scan(self) -> Optional[TomoScanBase]:
if self._scan and self._scan():
return self._scan()
else:
return None
def is_valid(self) -> bool:
if self._valid is None:
self._valid = self.run()
return self._valid
def clear(self):
self._valid = None
def run(self) -> Optional[bool]:
"""
Return None if unable to find if valid or not. Otherwise a boolean
"""
if self.scan is None:
self._valid = None
return None
else:
return self._run()
class DarkEntryValidator(_ScanParamValidator):
"""
Check darks are present and valid
"""
def __init__(self, scan):
super().__init__(
scan=scan,
name="dark(s)",
location=scan.get_dark_expected_location(),
)
    def _run(self) -> bool:
return self.scan.darks is not None and len(self.scan.darks) > 0
class _VdsAndValuesValidatorMixIn:
def __init__(self, check_values, check_vds):
self._check_values = check_values
self._check_vds = check_vds
self._has_data = None
self._vds_ok = None
self._no_nan = None
@property
def is_valid(self):
raise NotImplementedError("Base class")
@property
def name(self):
raise NotImplementedError("Base class")
@property
def scan(self):
raise NotImplementedError("Base class")
@property
def location(self):
raise NotImplementedError("Base class")
@property
def check_values(self):
return self._check_values
@property
def check_vds(self):
return self._check_vds
def check_urls(self, urls: dict):
if urls is None:
return True
_, compacted_urls = get_compacted_dataslices(urls, return_url_set=True)
if self.check_vds:
# compact urls to speed up
for _, url in compacted_urls.items():
if dataset_has_broken_vds(url=url):
self._vds_ok = False
return False
else:
self._vds_ok = True
if self.check_values:
self._no_nan = True
for _, url in compacted_urls.items():
data = get_data(url)
self._no_nan = self._no_nan and not numpy.isnan(data).any()
return self._no_nan
return True
def clear(self):
self._has_data = None
self._vds_ok = None
self._no_nan = None
def info(self, with_scan=True):
text = "VALID".ljust(7) if self.is_valid() else "INVALID".ljust(7)
if not self._has_data:
text = " - ".join(
(text, f"Unable to find data. Expected location: {self.location}")
)
        elif self.check_vds and not self._vds_ok:
            text = " - ".join((text, "At least one dataset seems to have a broken link"))
        elif self.check_values and not self._no_nan:
            text = " - ".join(
                (text, "At least one dataset seems to contain `nan` values")
            )
text = [
f"{self.name}".ljust(_VALIDATOR_NAME_TXT_AJUST) + ":",
text,
]
if with_scan:
text.insert(0, f"{str(self.scan)}".ljust(_SCAN_NAME_TXT_AJUST) + ",")
return " ".join(text)
class DarkDatasetValidator(DarkEntryValidator, _VdsAndValuesValidatorMixIn):
"""Check entries exists and values are valid"""
def __init__(self, scan, check_vds, check_values):
DarkEntryValidator.__init__(self, scan=scan)
_VdsAndValuesValidatorMixIn.__init__(
self, check_vds=check_vds, check_values=check_values
)
def _run(self) -> bool:
# check darks exists
self._has_data = DarkEntryValidator._run(self)
if self._has_data is False:
return False
return _VdsAndValuesValidatorMixIn.check_urls(self, self.scan.darks)
def info(self, with_scan=True):
return _VdsAndValuesValidatorMixIn.info(self, with_scan)
class FlatEntryValidator(_ScanParamValidator):
"""
Check flats are present and valid
"""
def __init__(self, scan):
super().__init__(
scan=scan, name="flat(s)", location=scan.get_flat_expected_location()
)
def _run(self) -> Optional[bool]:
return self.scan.flats is not None and len(self.scan.flats) > 0
class FlatDatasetValidator(FlatEntryValidator, _VdsAndValuesValidatorMixIn):
"""Check entries exists and values are valid"""
def __init__(self, scan, check_vds, check_values):
FlatEntryValidator.__init__(self, scan=scan)
_VdsAndValuesValidatorMixIn.__init__(
self, check_vds=check_vds, check_values=check_values
)
def _run(self) -> bool:
        # check flats exist
self._has_data = FlatEntryValidator._run(self)
if self._has_data is False:
return False
return _VdsAndValuesValidatorMixIn.check_urls(self, self.scan.flats)
def info(self, with_scan=True):
return _VdsAndValuesValidatorMixIn.info(self, with_scan)
class ProjectionEntryValidator(_ScanParamValidator):
"""
    Check that projections are present and seem coherent with what is expected
"""
def __init__(self, scan):
super().__init__(
scan=scan,
name="projection(s)",
location=scan.get_projection_expected_location(),
)
def _run(self) -> Optional[bool]:
if self.scan.projections is None:
return False
elif self.scan.tomo_n is not None:
return len(self.scan.projections) == self.scan.tomo_n
else:
return len(self.scan.projections) > 0
class ProjectionDatasetValidator(ProjectionEntryValidator, _VdsAndValuesValidatorMixIn):
"""Check projections frames exists and values seems valid"""
def __init__(self, scan, check_vds, check_values):
ProjectionEntryValidator.__init__(self, scan=scan)
_VdsAndValuesValidatorMixIn.__init__(
self, check_vds=check_vds, check_values=check_values
)
def _run(self) -> bool:
        # check projections exist
self._has_data = ProjectionEntryValidator._run(self)
if self._has_data is False:
return False
return _VdsAndValuesValidatorMixIn.check_urls(self, self.scan.projections)
def info(self, with_scan=True):
return _VdsAndValuesValidatorMixIn.info(self, with_scan)
class EnergyValidator(_ScanParamValidator):
"""Check energy can be read and is not 0"""
def __init__(self, scan):
super().__init__(
scan=scan,
name="energy",
location=scan.get_energy_expected_location(),
)
def _run(self) -> Optional[bool]:
return self.scan.energy not in (None, 0)
class DistanceValidator(_ScanParamValidator):
"""Check distance can be read and is not 0"""
def __init__(self, scan):
super().__init__(
scan=scan,
name="distance",
location=scan.get_distance_expected_location(),
)
def _run(self) -> Optional[bool]:
return self.scan.distance not in (None, 0)
class PixelValidator(_ScanParamValidator):
"""Check pixel size can be read and is / are not 0"""
def __init__(self, scan):
super().__init__(
scan=scan,
name="pixel size",
location=scan.get_pixel_size_expected_location(),
)
def _run(self) -> Optional[bool]:
from tomoscan.esrf.hdf5scan import HDF5TomoScan
if isinstance(self.scan, HDF5TomoScan):
return (self.scan.x_pixel_size not in (None, 0)) and (
self.scan.y_pixel_size not in (None, 0)
)
else:
return self.scan.pixel_size not in (None, 0)
class _ValidatorGroupMixIn:
"""
Represents a group of validators.
    Defines a `checkup` function to display a summary of valid and invalid checks
"""
def __init__(self):
self._validators = []
def checkup(self, only_issues=False) -> str:
"""
compute a short text with:
        * if only_issues is False: all the information checked, with its status
        * if only_issues is True: only the mandatory information that is missing
"""
def _is_invalid(validator):
return not validator.is_valid()
validators_with_issues = tuple(filter(_is_invalid, self._validators))
def get_first_chars(validator):
if validator.is_valid():
return "+"
else:
return "-"
if only_issues:
if len(validators_with_issues) == 0:
text = self.get_text_no_issue() + "\n"
else:
text = [
f" {get_first_chars(validator)} {validator.info(with_scan=False)}"
for validator in validators_with_issues
]
text.insert(0, self.get_text_issue(len(validators_with_issues)))
text.append(" ")
text = "\n".join(text)
else:
text = [
f" {get_first_chars(validator)} {validator.info(with_scan=False)}"
for validator in self._validators
]
if len(validators_with_issues) == 0:
text.insert(0, self.get_text_no_issue())
else:
text.insert(0, self.get_text_issue(len(validators_with_issues)))
text.append(" ")
text = "\n".join(text)
return text
    def is_valid(self) -> bool:
        valid = True
        for validator in self._validators:
            assert isinstance(
                validator, ValidatorBase
            ), "validators should be instances of ValidatorBase"
            # use `and`, not `+`: summing booleans keeps `valid` truthy even when a validator fails
            valid = valid and validator.is_valid()
        return valid
def _run(self) -> Optional[bool]:
run_ok = True
for validator in self._validators:
run_ok = run_ok and validator.run()
return run_ok
def clear(self) -> None:
[validator.clear() for validator in self._validators]
def get_text_no_issue(self) -> str:
raise NotImplementedError("Base class")
def get_text_issue(self, n_issue) -> str:
raise NotImplementedError("Base class")
class BasicScanValidator(_ValidatorGroupMixIn, ValidatorBase):
"""Check that a scan has some basic parameters as dark, flat..."""
def __init__(
self, scan, check_vds=True, check_dark=True, check_flat=True, check_values=False
):
super(BasicScanValidator, self).__init__()
if not isinstance(scan, TomoScanBase):
raise TypeError(f"{scan} is expected to be an instance of {TomoScanBase}")
self._scan = scan
self._validators.append(
ProjectionDatasetValidator(
scan=scan, check_values=check_values, check_vds=check_vds
)
)
if check_dark:
self._validators.append(
DarkDatasetValidator(
scan=scan, check_values=check_values, check_vds=check_vds
)
)
if check_flat:
self._validators.append(
FlatDatasetValidator(
scan=scan, check_values=check_values, check_vds=check_vds
)
)
@property
def scan(self):
return self._scan
def get_text_no_issue(self) -> str:
header = f"{_OK_UCODE}{_THUMB_UP_UCODE}{_OK_UCODE}"
return f"{header}\n No issue found from {self.scan}."
def get_text_issue(self, n_issue) -> str:
header = f"{_EXPLOSION_UCODE}{_BOMB_UCODE}{_EXPLOSION_UCODE}"
return f"{header}\n {n_issue} issues found from {self.scan}"
class ReconstructionValidator(BasicScanValidator):
"""
    Check that a dataset/scan has enough valid parameters to be reconstructed
by a software like nabu
"""
def __init__(
self,
scan: TomoScanBase,
check_phase_retrieval=True,
check_values=False,
check_vds=True,
check_dark=True,
check_flat=True,
):
super().__init__(
scan=scan,
check_dark=check_dark,
check_flat=check_flat,
check_values=check_values,
check_vds=check_vds,
)
self._need_phase_retrieval = check_phase_retrieval
if self.check_phase_retrieval:
self._validators.append(DistanceValidator(scan=scan))
self._validators.append(EnergyValidator(scan=scan))
self._validators.append(PixelValidator(scan=scan))
@property
def check_phase_retrieval(self):
return self._need_phase_retrieval
@check_phase_retrieval.setter
def check_phase_retrieval(self, check):
self._need_phase_retrieval = check
def is_valid_for_reconstruction(
scan: TomoScanBase, need_phase_retrieval: bool = True, check_values: bool = False
):
"""
    check that `scan` contains the necessary and valid information to be reconstructed.

    :param TomoScanBase scan: scan to be checked
    :param bool need_phase_retrieval: if True also check the information needed for phase retrieval (energy, sample/detector distance, pixel size...)
    :param bool check_values: open datasets to check for nan values or broken links to files
"""
checker = ReconstructionValidator(
scan=scan,
check_phase_retrieval=need_phase_retrieval,
check_values=check_values,
)
return checker.is_valid()
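# A hedged usage sketch (the acquisition file and entry below are hypothetical):
#
#     from tomoscan.esrf.scan.hdf5scan import HDF5TomoScan
#
#     scan = HDF5TomoScan("acquisition.h5", entry="entry0000")
#     validator = ReconstructionValidator(scan, check_phase_retrieval=True)
#     print(validator.checkup(only_issues=True))
#     if not is_valid_for_reconstruction(scan):
#         print("scan cannot be reconstructed as is")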
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684328172.0
tomoscan-1.2.2/tomoscan/version.py 0000644 0236253 0006511 00000010372 00000000000 017360 0 ustar 00payno soft 0000000 0000000 #!/usr/bin/env python
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2015-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""Unique place where the version number is defined.
provides:
* version = "1.2.3" or "1.2.3-beta4"
* version_info = named tuple (1,2,3,"beta",4)
* hexversion: 0x010203B4
* strictversion = "1.2.3b4"
* debianversion = "1.2.3~beta4"
* calc_hexversion: the function to transform a version_tuple into an integer
This is called hexversion since it only really looks meaningful when viewed as the
result of passing it to the built-in hex() function.
The version_info value may be used for a more human-friendly encoding of the same information.
The hexversion is a 32-bit number with the following layout:
Bits (big endian order) Meaning
1-8 PY_MAJOR_VERSION (the 2 in 2.1.0a3)
9-16 PY_MINOR_VERSION (the 1 in 2.1.0a3)
17-24 PY_MICRO_VERSION (the 0 in 2.1.0a3)
25-28 PY_RELEASE_LEVEL (0xA for alpha, 0xB for beta, 0xC for release candidate and 0xF for final)
29-32 PY_RELEASE_SERIAL (the 3 in 2.1.0a3, zero for final releases)
Thus 2.1.0a3 is hexversion 0x020100a3.
"""
from collections import namedtuple
__authors__ = ["Jérôme Kieffer"]
__license__ = "MIT"
__copyright__ = "European Synchrotron Radiation Facility, Grenoble, France"
__date__ = "28/02/2018"
__status__ = "production"
__docformat__ = "restructuredtext"
__all__ = [
"date",
"version_info",
"strictversion",
"hexversion",
"debianversion",
"calc_hexversion",
]
RELEASE_LEVEL_VALUE = {
"dev": 0,
"alpha": 10,
"beta": 11,
"gamma": 12,
"rc": 13,
"final": 15,
}
MAJOR = 1
MINOR = 2
MICRO = 2
RELEV = "final" # <16
SERIAL = 0 # <16
date = __date__
_version_info = namedtuple(
"version_info", ["major", "minor", "micro", "releaselevel", "serial"]
)
version_info = _version_info(MAJOR, MINOR, MICRO, RELEV, SERIAL)
strictversion = version = debianversion = "%d.%d.%d" % version_info[:3]
if version_info.releaselevel != "final":
version += "-%s%s" % version_info[-2:]
debianversion += (
"~adev%i" % version_info[-1] if RELEV == "dev" else "~%s%i" % version_info[-2:]
)
prerel = "a" if RELEASE_LEVEL_VALUE.get(version_info[3], 0) < 10 else "b"
if prerel not in "ab":
prerel = "a"
strictversion += prerel + str(version_info[-1])
def calc_hexversion(major=0, minor=0, micro=0, releaselevel="dev", serial=0):
"""Calculate the hexadecimal version number from the tuple version_info:
:param major: integer
:param minor: integer
:param micro: integer
    :param releaselevel: integer or string
:param serial: integer
:return: integer always increasing with revision numbers
"""
try:
releaselevel = int(releaselevel)
except ValueError:
releaselevel = RELEASE_LEVEL_VALUE.get(releaselevel, 0)
hex_version = int(serial)
hex_version |= releaselevel * 1 << 4
hex_version |= int(micro) * 1 << 8
hex_version |= int(minor) * 1 << 16
hex_version |= int(major) * 1 << 24
return hex_version
hexversion = calc_hexversion(*version_info)
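# For example, with the values defined above (1.2.2 final, serial 0):
#
#     assert calc_hexversion(1, 2, 2, "final", 0) == 0x010202F0
#     assert hexversion == 0x010202F0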
if __name__ == "__main__":
print(version)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684327784.0
tomoscan-1.2.2/tomoscan/volumebase.py 0000644 0236253 0006511 00000045500 00000000000 020036 0 ustar 00payno soft 0000000 0000000 # coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2016-2022 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
"""module to define base class for a volume"""
__authors__ = ["H. Payno"]
__license__ = "MIT"
__date__ = "27/01/2022"
import numpy
from tomoscan.identifier import VolumeIdentifier
from tomoscan.tomoobject import TomoObject
from tomoscan.scanbase import TomoScanBase
from tomoscan.utils import BoundingBox1D, BoundingBox3D
from tomoscan.unitsystem.metricsystem import MetricSystem
from typing import Optional, Union
from silx.io.url import DataUrl
from silx.math.combo import ( # pylint: disable=E0611 (not found from the pyx file)
min_max,
)
from silx.utils.deprecation import deprecated_warning
class VolumeBase(TomoObject):
"""
context: we aim at having a common way of saving and loading volumes through the tomotools suite.
    The goal is to ease handling of volumes when creating them or doing operations with them, like stitching...

    :param DataUrl url: url of the volume. Could be the path to a master file if we can provide one per volume. Otherwise could be a pattern of edf files or a tiff file with a data range
    :param Optional[TomoScanBase] source_scan: potential instance of TomoScanBase in order to get extra information. This could be saved in the volume file too (external link)
    :param Optional[numpy.ndarray] data: volume data. Expected to be 3D
    :param Optional[dict] metadata: metadata associated to the volume. Must be a dict of serializable objects
    :param Optional[DataUrl] data_url: url to save the data. If provided then url must not be provided. If an object is constructed from data and metadata urls then there is no rule to create a VolumeIdentifier and calling get_identifier will raise an error.
    :param Optional[DataUrl] metadata_url: url to save the metadata. If provided then url must not be provided. Same remark as for data_url regarding identifiers.
    :param bool overwrite: when saving data, overwrite an already existing resource (if True) or not.
:raises TypeError:
:raises ValueError: * if data is a numpy array and not 3D.
:raises OSError:
"""
EXTENSION = None
def __init__(
self,
url: Optional[DataUrl] = None,
data: Optional[numpy.ndarray] = None,
source_scan: Optional[TomoScanBase] = None,
metadata: Optional[dict] = None,
data_url: Optional[DataUrl] = None,
metadata_url: Optional[DataUrl] = None,
overwrite: bool = False,
data_extension: Optional[str] = None,
metadata_extension: Optional[str] = None,
) -> None:
super().__init__()
if url is not None and (data_url is not None or metadata_url is not None):
raise ValueError(
"Either url or (data_url and / or metadata_url) can be provided not both"
)
        # warning on source_scan: it should be defined before the url because deduce_data_and_metadata_urls might need it.
        # Then, as the source scan can imply several modifications of the url, we can only define it during construction and it
        # must not evolve during the object lifetime
if not isinstance(source_scan, (TomoScanBase, type(None))):
raise TypeError(
f"source scan is expected to be None or an instance of TomoScanBase. Not {type(source_scan)}"
)
self.__source_scan = source_scan
self._data_extension = data_extension
self._metadata_extension = metadata_extension
self.overwrite = overwrite
self.url = url
self.metadata = metadata
self.data = data
if url is None:
self._data_url = data_url
self._metadata_url = metadata_url
else:
            # otherwise they have already been set when `url` was assigned, through deduce_data_and_metadata_urls
pass
@property
def url(self):
return self._url
@url.setter
def url(self, url: Optional[DataUrl]) -> None:
if url is not None and not isinstance(url, DataUrl):
raise TypeError
self._url = url
self._data_url, self._metadata_url = self.deduce_data_and_metadata_urls(url)
def deduce_data_and_metadata_urls(self, url: Optional[DataUrl]) -> tuple:
"""
compute data and metadata urls from 'parent url'
:return: data_url: Optional[DataUrl], metadata_url: Optional[DataUrl]
"""
raise NotImplementedError("Base class")
@property
def data_extension(self):
return self._data_extension
@property
def metadata_extension(self):
return self._metadata_extension
@property
def data_url(self):
return self._data_url
@property
def metadata_url(self):
return self._metadata_url
@property
def data(self) -> Optional[numpy.ndarray]:
return self._data
@data.setter
def data(self, data):
if not isinstance(data, (numpy.ndarray, type(None))):
raise TypeError(
f"data is expected to be None or a numpy array not {type(data)}"
)
if isinstance(data, numpy.ndarray) and data.ndim != 3:
raise ValueError(f"data is expected to be 3D and not {data.ndim}D.")
self._data = data
def get_slice(
self,
index=None,
axis=None,
xy=None,
xz=None,
yz=None,
url: Optional[DataUrl] = None,
):
if xy is yz is xz is axis is None:
raise ValueError("axis should be provided")
if self.data is None:
            # fixme: must be redefined by inheriting classes.
            # For example for single-frame-based volumes we are simply loading the full volume instead of retrieving the
            # relevant file. This is a bottleneck especially for xy slices because all the files are loaded instead of one
# in the worst case.
self.load_data(url=url, store=True)
if self.data is not None:
return self.select(
volume=self.data, xy=xy, xz=xz, yz=yz, axis=axis, index=index
)
else:
return None
@property
def metadata(self) -> Optional[dict]:
return self._metadata
@metadata.setter
def metadata(self, metadata: Optional[dict]):
if not isinstance(metadata, (dict, type(None))):
raise TypeError(
f"metadata is expected to be None or a dict not {type(metadata)}"
)
self._metadata = metadata
@staticmethod
def example_defined_from_str_identifier() -> str:
"""example as string to explain how users can defined identifiers from a string"""
raise NotImplementedError("Base class")
def clear_cache(self):
"""remove object stored in data and medatada"""
self.data = None
self.metadata = None
# generic function requested
@property
def source_scan(self) -> Optional[TomoScanBase]:
return self.__source_scan
@property
def overwrite(self) -> bool:
return self._overwrite
@overwrite.setter
def overwrite(self, overwrite: bool) -> None:
if not isinstance(overwrite, bool):
raise TypeError
self._overwrite = overwrite
# function to be loaded to an url
@staticmethod
def from_identifier(identifier: Union[str, VolumeIdentifier]):
"""Return the Dataset from a identifier"""
raise NotImplementedError("Base class")
def get_identifier(self) -> VolumeIdentifier:
"""dataset unique identifier. Can be for example a hdf5 and
en entry from which the dataset can be rebuild"""
raise NotImplementedError("Base class")
# utils required for operations like stitching
@staticmethod
def _insure_reconstruction_dict_exists(ddict):
if "processing_options" not in ddict:
ddict["processing_options"] = {}
if "reconstruction" not in ddict["processing_options"]:
ddict["processing_options"]["reconstruction"] = {}
@property
def position(self) -> tuple:
"""position are provided as a tuple using the same reference for axis as the volume data"""
metadata = self.metadata or self.load_metadata()
return tuple(
metadata.get("processing_options", {})
.get("reconstruction", {})
.get("position", None)
)
@position.setter
def position(self, position) -> None:
if self.metadata is None:
self.metadata = {}
self._insure_reconstruction_dict_exists(self.metadata)
self.metadata["processing_options"]["reconstruction"]["position"] = numpy.array(
position
)
@property
def pixel_size(self):
metadata = self.metadata or self.load_metadata()
pixel_size = (
metadata.get("processing_options", {})
.get("reconstruction", {})
.get("pixel_size_cm", None)
)
if pixel_size is not None:
return pixel_size * MetricSystem.CENTIMETER.value
else:
return None
@pixel_size.setter
def pixel_size(self, pixel_size) -> None:
if self.metadata is None:
self.metadata = {}
self._insure_reconstruction_dict_exists(self.metadata)
self.metadata["processing_options"]["reconstruction"]["pixel_size_cm"] = (
pixel_size / MetricSystem.CENTIMETER.value
)
def get_volume_shape(self):
raise NotImplementedError("Base class")
def get_bounding_box(self, axis: Optional[Union[str, int]] = None):
if axis is None:
x_bb = self.get_bounding_box(axis="x")
y_bb = self.get_bounding_box(axis="y")
z_bb = self.get_bounding_box(axis="z")
return BoundingBox3D(
(z_bb.min, y_bb.min, x_bb.min),
(z_bb.max, y_bb.max, x_bb.max),
)
position = self.position
shape = self.get_volume_shape()
# TODO: does it make sense that pixel size is a scalar ?
pixel_size = self.pixel_size
missing = []
if position is None:
missing.append("position")
if shape is None:
missing.append("shape")
if pixel_size is None:
missing.append("pixel_size")
if len(missing) > 0:
raise ValueError(
f"Unable to get bounding box. Missing information: {'; '.join(missing)}"
)
else:
assert axis is not None
if axis == "x":
idx = 2
elif axis == "y":
idx = 1
elif axis == "z":
idx = 0
else:
raise ValueError(f"axis '{axis}' is not handled")
min_pos_in_meter = position[idx] - pixel_size * shape[idx] / 2.0
max_pos_in_meter = position[idx] + pixel_size * shape[idx] / 2.0
return BoundingBox1D(min_pos_in_meter, max_pos_in_meter)
def get_min_max(self) -> tuple:
"""
        compute the min and max of the volume. Can take some time but avoids loading the full volume in memory
"""
if self.data is not None:
return self.data.min(), self.data.max()
else:
min_v, max_v = None, None
for s in self.browse_slices():
min_v = min(min_v, s.min()) if min_v is not None else s.min()
max_v = max(max_v, s.max()) if max_v is not None else s.max()
return min_v, max_v
# load / save stuff
@property
def extension(self) -> str:
return self.EXTENSION
def load(self):
self.load_metadata(store=True)
# always load metadata first because we might expect to get some information from
# it in order to load data next
self.load_data(store=True)
def save(self, url: Optional[DataUrl] = None, **kwargs):
if url is not None:
data_url, metadata_url = self.deduce_data_and_metadata_urls(url=url)
else:
data_url = self.data_url
metadata_url = self.metadata_url
self.save_data(data_url, **kwargs)
if self.metadata is not None:
            # a volume is not forced to have metadata to save. But calling save_metadata directly might raise an error
            # if no metadata is found
self.save_metadata(metadata_url)
def save_data(self, url: Optional[DataUrl] = None, **kwargs) -> None:
"""
save data to the provided url or existing one if none is provided
"""
raise NotImplementedError("Base class")
def save_metadata(self, url: Optional[DataUrl] = None) -> None:
"""
save metadata to the provided url or existing one if none is provided
"""
raise NotImplementedError("Base class")
def load_data(
self, url: Optional[DataUrl] = None, store: bool = True
) -> numpy.ndarray:
raise NotImplementedError("Base class")
def load_metadata(self, url: Optional[DataUrl] = None, store: bool = True) -> dict:
raise NotImplementedError("Base class")
def check_can_provide_identifier(self):
if self.url is None:
raise ValueError(
"Unable to provide an identifier. No url has been provided"
)
@staticmethod
def select(volume, xy=None, xz=None, yz=None, axis=None, index=None):
if xy is not None:
deprecated_warning(
type_="parameter",
name="xy",
replacement="axis and index",
)
if axis is None and index is None:
axis = 0
index = xy
else:
                raise ValueError("several axes requested (previously xy, xz, yz)")
elif xz is not None:
deprecated_warning(
type_="parameter",
name="xz",
replacement="axis and index",
)
if axis is None and index is None:
axis = 1
index = xz
else:
                raise ValueError("several axes requested (previously xy, xz, yz)")
elif yz is not None:
deprecated_warning(
type_="parameter",
name="yz",
replacement="axis and index",
)
if axis is None and index is None:
axis = 2
index = yz
else:
                raise ValueError("several axes requested (previously xy, xz, yz)")
if not volume.ndim == 3:
raise TypeError(f"volume is expected to be 3D. {volume.ndim}D provided")
if axis == 0:
return volume[index]
elif axis == 1:
return volume[:, index]
elif axis == 2:
return volume[:, :, index]
else:
raise ValueError(f"axis {axis} is not handled")
def browse_data_files(self, url=None):
"""
        :param url: data url. If not provided will take self.data_url
        :return: a generator going through all the existing files associated to the data volume
"""
raise NotImplementedError("Base class")
def browse_metadata_files(self, url=None):
"""
        :param url: metadata url. If not provided will take self.metadata_url
        :return: a generator going through all the existing files associated to the volume metadata
"""
raise NotImplementedError("Base class")
def browse_data_urls(self, url=None):
"""
generator on data urls used.
:param url: data url to be used. If not provided will take self.data_url
"""
raise NotImplementedError("Base class")
def browse_slices(self, url=None):
"""
        generator of 2D numpy arrays, each representing a slice

        :param url: data url to be used. If not provided will browse self.data if it exists, else self.data_url
        :warning: this will get the slices from the data on disk and never use the `data` property,
                  so before browsing slices you might want to check whether data is already loaded
"""
raise NotImplementedError("Base class")
def load_chunk(self, chunk, url=None):
"""
Load a sub-volume.
:param chunk: tuple of slice objects indicating which chunk of the volume has to be loaded.
:param url: data url to be used. If not provided will take self.data_url
"""
raise NotImplementedError("Base class")
def get_min_max_values(self, url=None) -> tuple:
"""
        compute the min and max over 'data' if it exists, else by browsing the volume slice by slice
:param url: data url to be used. If not provided will take self.data_url
"""
min_v = None
max_v = None
if self.data is not None:
data = self.data
else:
data = self.browse_slices(url=url)
for slice in data:
if min_v is None:
min_v = slice.min()
max_v = slice.max()
else:
min_lv, max_lv = min_max(slice, finite=True)
min_v = min(min_v, min_lv)
max_v = max(max_v, max_lv)
return min_v, max_v
def data_file_saver_generator(self, n_frames, data_url: DataUrl, overwrite: bool):
"""
        Provide a helper class to dump data frame by frame. For now the only possible interaction is
        `helper[:] = frame`

        :param int n_frames: number of frames the final volume will contain
        :param DataUrl data_url: url to dump data to
        :param bool overwrite: overwrite existing files?
"""
raise NotImplementedError("Base class")
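# A hedged usage sketch with a concrete subclass (file name and data path below
# are hypothetical; HDF5Volume is one of the provided implementations):
#
#     import numpy
#     from tomoscan.esrf.volume import HDF5Volume
#
#     volume = HDF5Volume(
#         file_path="recon.h5",
#         data_path="entry",
#         data=numpy.zeros((4, 16, 16), dtype=numpy.float32),
#         metadata={"processing_options": {"reconstruction": {"pixel_size_cm": 1e-4}}},
#         overwrite=True,
#     )
#     volume.save()          # writes data, then metadata
#     volume.clear_cache()
#     volume.load()          # loads metadata first, then data
#     frame = volume.get_slice(axis=0, index=2)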
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 011451 x ustar 00 0000000 0000000 27 mtime=1684328205.161433
tomoscan-1.2.2/tomoscan.egg-info/ 0000755 0236253 0006511 00000000000 00000000000 017010 5 ustar 00payno soft 0000000 0000000 ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684328204.0
tomoscan-1.2.2/tomoscan.egg-info/PKG-INFO 0000644 0236253 0006511 00000001667 00000000000 020117 0 ustar 00payno soft 0000000 0000000 Metadata-Version: 2.1
Name: tomoscan
Version: 1.2.2
Summary: "utilitary to access tomography data at esrf"
Home-page: https://gitlab.esrf.fr/tomotools/tomoscan
Author: data analysis unit
Author-email: henri.payno@esrf.fr
License: MIT
Project-URL: Bug Tracker, https://gitlab.esrf.fr/tomotools/tomoscan/-/issues
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Environment :: Console
Classifier: Environment :: X11 Applications :: Qt
Classifier: Operating System :: POSIX
Classifier: Natural Language :: English
Classifier: Topic :: Scientific/Engineering :: Physics
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.6
Description-Content-Type: text/markdown
Provides-Extra: doc
Provides-Extra: full
Provides-Extra: setup_requires
License-File: LICENSE
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684328204.0
tomoscan-1.2.2/tomoscan.egg-info/SOURCES.txt 0000644 0236253 0006511 00000005265 00000000000 020704 0 ustar 00payno soft 0000000 0000000 LICENSE
README.md
setup.cfg
setup.py
tomoscan/__init__.py
tomoscan/factory.py
tomoscan/framereducerbase.py
tomoscan/identifier.py
tomoscan/io.py
tomoscan/normalization.py
tomoscan/progress.py
tomoscan/scanbase.py
tomoscan/scanfactory.py
tomoscan/serie.py
tomoscan/tomoobject.py
tomoscan/validator.py
tomoscan/version.py
tomoscan/volumebase.py
tomoscan.egg-info/PKG-INFO
tomoscan.egg-info/SOURCES.txt
tomoscan.egg-info/dependency_links.txt
tomoscan.egg-info/requires.txt
tomoscan.egg-info/top_level.txt
tomoscan/esrf/__init__.py
tomoscan/esrf/edfscan.py
tomoscan/esrf/hdf5scan.py
tomoscan/esrf/mock.py
tomoscan/esrf/utils.py
tomoscan/esrf/identifier/__init__.py
tomoscan/esrf/identifier/edfidentifier.py
tomoscan/esrf/identifier/folderidentifier.py
tomoscan/esrf/identifier/hdf5Identifier.py
tomoscan/esrf/identifier/jp2kidentifier.py
tomoscan/esrf/identifier/rawidentifier.py
tomoscan/esrf/identifier/tiffidentifier.py
tomoscan/esrf/identifier/url_utils.py
tomoscan/esrf/scan/__init__.py
tomoscan/esrf/scan/edfscan.py
tomoscan/esrf/scan/hdf5scan.py
tomoscan/esrf/scan/mock.py
tomoscan/esrf/scan/utils.py
tomoscan/esrf/scan/framereducer/__init__.py
tomoscan/esrf/scan/framereducer/edfframereducer.py
tomoscan/esrf/scan/framereducer/hdf5framereducer.py
tomoscan/esrf/volume/__init__.py
tomoscan/esrf/volume/edfvolume.py
tomoscan/esrf/volume/hdf5volume.py
tomoscan/esrf/volume/jp2kvolume.py
tomoscan/esrf/volume/mock.py
tomoscan/esrf/volume/rawvolume.py
tomoscan/esrf/volume/singleframebase.py
tomoscan/esrf/volume/tiffvolume.py
tomoscan/esrf/volume/utils.py
tomoscan/nexus/__init__.py
tomoscan/nexus/paths/__init__.py
tomoscan/nexus/paths/nxdetector.py
tomoscan/nexus/paths/nxinstrument.py
tomoscan/nexus/paths/nxmonitor.py
tomoscan/nexus/paths/nxsample.py
tomoscan/nexus/paths/nxsource.py
tomoscan/nexus/paths/nxtomo.py
tomoscan/test/__init__.py
tomoscan/test/conftest.py
tomoscan/test/test_framereducerbase.py
tomoscan/test/test_hdf5_utils.py
tomoscan/test/test_io.py
tomoscan/test/test_normalization.py
tomoscan/test/test_progress.py
tomoscan/test/test_scanbase.py
tomoscan/test/test_scanfactory.py
tomoscan/test/test_serie.py
tomoscan/test/test_tomoobject.py
tomoscan/test/test_utils.py
tomoscan/test/test_validator.py
tomoscan/test/test_version.py
tomoscan/test/test_volume_base.py
tomoscan/test/test_volume_utils.py
tomoscan/test/utils.py
tomoscan/unitsystem/__init__.py
tomoscan/unitsystem/electriccurrentsystem.py
tomoscan/unitsystem/energysystem.py
tomoscan/unitsystem/metricsystem.py
tomoscan/unitsystem/timesystem.py
tomoscan/unitsystem/unit.py
tomoscan/unitsystem/voltagesystem.py
tomoscan/utils/__init__.py
tomoscan/utils/decorator.py
tomoscan/utils/geometry.py
tomoscan/utils/hdf5.py
tomoscan/utils/io.py
tomoscan/utils/volume.py ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684328204.0
tomoscan-1.2.2/tomoscan.egg-info/dependency_links.txt 0000644 0236253 0006511 00000000001 00000000000 023056 0 ustar 00payno soft 0000000 0000000
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684328204.0
tomoscan-1.2.2/tomoscan.egg-info/requires.txt 0000644 0236253 0006511 00000000454 00000000000 021413 0 ustar 00payno soft 0000000 0000000 setuptools
h5py>=3.0
silx>=0.14a
lxml
dicttoxml
packaging
[doc]
Sphinx<5.2.0,>=4.0.0
nbsphinx
pandoc
ipykernel
jupyter_client
nbconvert
h5glance
pytest
[full]
Sphinx<5.2.0,>=4.0.0
nbsphinx
pandoc
ipykernel
jupyter_client
nbconvert
h5glance
pytest
glymur
tifffile
[setup_requires]
setuptools
numpy
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 011453 x ustar 00 0000000 0000000 22 mtime=1684328204.0
tomoscan-1.2.2/tomoscan.egg-info/top_level.txt 0000644 0236253 0006511 00000000011 00000000000 021532 0 ustar 00payno soft 0000000 0000000 tomoscan