Need to discuss how we handle this:
```python
def extract_input_data(self, input_data: InputDataGLM):
    self._design_loc = input_data.design_loc
    self._design_scale = input_data.design_scale
    self._size_factors = input_data.size_factors
    self._constraints_loc = input_data.constraints_loc
    self._constraints_scale = input_data.constraints_scale
    self._design_loc_names = input_data.design_loc_names
    self._design_scale_names = input_data.design_scale_names
    self._loc_names = input_data.loc_names
    self._scale_names = input_data.scale_names
    self._x = input_data.x
    self._cast_dtype = input_data.cast_dtype
    self._chunk_size_genes = input_data.chunk_size_genes
    self._chunk_size_cells = input_data.chunk_size_cells
    self._xh_loc = np.matmul(self.design_loc, self.constraints_loc)
    self._xh_scale = np.matmul(self.design_scale, self.constraints_scale)
```
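(As a side note on the last two assignments: they just apply the constraint matrices to the design matrices via matrix multiplication. A minimal numpy sketch with made-up shapes, not batchglm's actual matrices:)

```python
import numpy as np

# Hypothetical shapes: 4 observations, 3 design columns, 2 identifiable parameters.
design_loc = np.array([[1., 0., 0.],
                       [1., 1., 0.],
                       [1., 0., 1.],
                       [1., 1., 1.]])
# Constraint matrix mapping the 3 design columns onto 2 free parameters.
constraints_loc = np.array([[1., 0.],
                            [0., 1.],
                            [0., 1.]])

# Equivalent of self._xh_loc = np.matmul(self.design_loc, self.constraints_loc)
xh_loc = np.matmul(design_loc, constraints_loc)
print(xh_loc.shape)  # (4, 2)
```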
There are basically three options in my opinion:
1. Remove `InputDataGLM` entirely: we only use it as a container now that does some type checks at the beginning, which we could do statically in `utils` anyway.
2. Keep `InputDataGLM` and reference its attributes in the model properties, i.e. return `model.input_data.<attribute>` when `model.<attribute>` is called.
3. Override `model.__getattr__` like so:

   ```python
   def __getattr__(self, attr: str):
       return self.input_data.__getattribute__(attr)
   ```

I'm in favour of 2, but as a middle ground to keep this support, maybe 3 would be an elegant solution for now.
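To make option 3 concrete, here is a minimal self-contained sketch of the delegation pattern (the toy `InputDataGLM` and `Model` below are illustrative stand-ins, not batchglm's real classes). Two caveats worth noting: `__getattr__` is only invoked when normal attribute lookup fails, and a guard for `input_data` itself avoids infinite recursion in edge cases such as unpickling, where `__getattr__` can run before `__init__` has set the attribute:

```python
class InputDataGLM:
    """Toy stand-in for batchglm's InputDataGLM container."""
    def __init__(self, x, design_loc):
        self.x = x
        self.design_loc = design_loc


class Model:
    def __init__(self, input_data: InputDataGLM):
        self.input_data = input_data

    def __getattr__(self, attr: str):
        # Only reached when normal attribute lookup fails on the model.
        # Guard against infinite recursion if input_data itself is missing.
        if attr == "input_data":
            raise AttributeError(attr)
        return getattr(self.input_data, attr)


model = Model(InputDataGLM(x=[[1.0]], design_loc=[[1.0, 0.0]]))
print(model.x)           # delegated to model.input_data.x
print(model.design_loc)  # delegated to model.input_data.design_loc
```

`getattr(self.input_data, attr)` is equivalent to the `self.input_data.__getattribute__(attr)` call in the snippet above, just slightly more idiomatic; attributes missing on both the model and the container still raise `AttributeError` as usual.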
Please advise @ilan-gold @davidsebfischer
(The snippet above is from batchglm/batchglm/models/base_glm/model.py, lines 66 to 82 in 6048230.)