Memory optimization pynxtools validate #752
Conversation
…ersion of is_valid_enum which defers the loading of data to occur only when required; most h5py.Datasets do not qualify as an enum. This change could also be added to the template-based validation in a future version, although there it is not expected to be as performance-improving: in template-based validation prior to NeXus file writing, all data are already in main memory, whereas for HDF5 they first need to be loaded from disk. So far, clean_str_attr(dataset[()]) ALWAYS loaded all data, for every dataset irrespective of its size, just to discard most of it in the next two lines when the code figures out that the dataset is not an instance of an enum concept.
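A minimal sketch of that deferred-loading pattern (function and metadata names here are illustrative, not the actual pynxtools API): the enum check on the concept happens before any dataset payload is read, so non-enum datasets cost no I/O at all.

```python
import h5py

def is_valid_enum_deferred(concept: dict, dataset: h5py.Dataset) -> bool:
    enum_values = concept.get("enumeration")  # hypothetical concept metadata
    if enum_values is None:
        # Most datasets are not enums: no payload is loaded, nothing to check.
        return True
    # Only for genuine enum concepts do we pay the cost of loading the value.
    value = dataset[()]
    if isinstance(value, bytes):
        value = value.decode("utf-8")
    return value in enum_values
```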
… resolving issues for em
…initial round of tests with float32: validation is now blazing fast for all three cases (contiguous, chunked uncompressed, chunked compressed). The substantial speedup is not a signature of failure or of bypassing certain function calls, but expected: in the contiguous float32 test case, the concept entry1/measurement/event1/image1/real is float32 with a 10 GB uncompressed payload. Keep in mind, though, that h5py and HDF5 always store datasets of elementary datatypes as type-homogeneous arrays, so testing whether the field is valid reduces to a simple test of the dtype attribute against the accepted ones; there is no need to load the data at all, unless the datatype is NX_POSINT. A similar expected speedup shows for is_valid_enum compared to v0.13.2: in that version the data were always fully loaded first, just to be immediately discarded inside the function call in most cases because the node did not encode an enum concept. The commit on this feature branch tested here instead checks FIRST whether the node is an enum, so the loading can be skipped entirely; this is essentially where the speedup comes from. We should test again with a concept of type NX_POSINT and our three storage schemes (contiguous, chunked uncompressed, chunked compressed); there we would expect a costly load of all data when contiguous, and a chunk-by-chunk integer positivity test during the is-valid-field HDF evaluation, but the baseline should remain flat. Next up: i) implement this test with posint and run again, ii) fix pytest from pynxtools, iii) move all testing code to proper tests and mark this feature branch as ready for review, i.e. move from draft to ready.
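A hedged sketch of such a chunk-by-chunk positivity test, assuming a recent h5py (where `Dataset.iter_chunks()` yields slice tuples for chunked layouts); the function name and its place in the validation flow are illustrative, not the actual pynxtools code:

```python
import h5py
import numpy as np

def holds_only_positive_ints(dataset: h5py.Dataset) -> bool:
    """Check an NX_POSINT-style constraint without ever holding the full payload."""
    if not np.issubdtype(dataset.dtype, np.integer):
        return False
    if dataset.chunks is not None:
        # Chunked layout: stream one chunk at a time into memory.
        for chunk_slice in dataset.iter_chunks():
            if np.any(dataset[chunk_slice] <= 0):
                return False
        return True
    # Contiguous layout: a full, costly load is hard to avoid here,
    # matching the expectation described above.
    return bool(np.all(dataset[()] > 0))
```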
…sed in the code to work on datasets and non-string payload in which case type annotations were imprecise
rettigl left a comment:
I appreciate the effort, but for me this PR includes too many changes to review properly. The connection of many of these changes to the issue at hand (large memory consumption) is not clear to me. I suggest splitting it up into several PRs.
Also, it changes the behavior of the validator by removing the acceptance of ints as floats, which we explicitly decided to allow at some point, as far as I remember.
Thank you @rettigl. Regarding the last point: there is a good reason why NeXus distinguishes NX_NUMBER, NX_(U,POS)INT, and NX_FLOAT. I do not agree that this past decision was a good one. IMHO a validator should check and report where there are inconsistencies; it should not convert types for convenience. It should be the responsibility of the pynxtools readers to feed correctly typed data to pynxtools straightaway. I can see that people may be fine with just using the default precision of data types, like Python's built-in float and int, but I disagree with cross-accepting what are different elementary data types, not least because our current implementation would then also need to check whether every int value is really exactly mappable onto a float of a certain precision.
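For concreteness, a small example (plain NumPy, independent of pynxtools) of why that mapping is not always exact: a 64-bit float has only 53 bits of mantissa, so not every int64 survives the round trip.

```python
import numpy as np

i = np.int64(2**53 + 1)   # representable as int64, but not as float64
f = np.float64(i)         # rounds to the nearest double, which is 2**53
print(np.int64(f) == i)   # False: cross-accepting int as float silently lost a value
```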
lukaspie left a comment:
As it stands, I would not merge this PR.
- I agree that we need separate functions for validity checks (HDF5 vs. template), since we don't always want to load full HDF5 datasets.
- Probably, we should split up the HDF5 and template validation into separate modules eventually. helpers.py is already rather overloaded. But let's do this in a separate PR.
- Regarding namefitting, I left a longer comment. I don't agree with the idea that sorting by optionality is really better than what we had before.
- Changes to the definitions do not belong here.
- vscode settings are too specific: as long as the NeXus files mentioned there are not part of the repo, we cannot use the launch.json. I am not suggesting adding these files; rather, the settings should be kept simpler.
- Regarding "validation, not conversion": on a technical design level, I would agree with this. However, my rationale for allowing simple conversion (or other workarounds, like adding the target attribute automatically) was always to allow for easier reader development. Maybe one compromise could be that we allow such conversions in the template validation, whereas for the HDF5 validation (where also non-pynxtools-generated files can be used), we are more strict. Curious about your opinion @rettigl.
…match but adding analysis part
…polluting all current tests with additional cases where indeed multiple names showed up
- def clean_str_attr(attr: str | bytes | None, encoding: str = "utf-8") -> str | None:
+ def decode_if_bytes(payload: Any, encoding: str = "utf-8") -> Any:
@RubelMozumder note how this PR has another decoding step, similar to what you have in #778. We should align this in one central function.
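One possible shape for such a central helper, merging clean_str_attr and decode_if_bytes — a sketch only, under the assumption that sequences of bytes should be decoded element-wise; not the final API:

```python
from typing import Any

def decode_if_bytes(payload: Any, encoding: str = "utf-8") -> Any:
    """Decode bytes payloads to str, recursing into lists/tuples; pass everything else through."""
    if isinstance(payload, bytes):
        return payload.decode(encoding)
    if isinstance(payload, (list, tuple)):
        return type(payload)(decode_if_bytes(item, encoding) for item in payload)
    return payload
```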
…red snippet to work through HDF5 string payload
lukaspie left a comment:
Probably okay, but I will have to test this in action to understand its full implications.
Planned for v0.14.0

The other aspect of #737: for HDF5-file-based validate we wish to reduce the number of cases in which large h5py.Dataset payloads need to be loaded into main memory, especially when a chunked storage layout would allow iterating over chunks.

Motivation was usage of the standalone HDF validator on a 32 GiB file from CENEM, which holds essentially only one dataset (a large image stack of approx. 32 GiB uncompressed payload), on a 128 GiB RAM workstation. Although that image stack is stored chunked, all data get loaded at some point, eventually generating multiple copies in main memory along the way and causing the validation process to be terminated after trying to allocate more than all the memory available on the workstation.
Key benefits: