uqmodels.modelization.DL_estimator package
Submodules
uqmodels.modelization.DL_estimator.Lstm_WS_ED module
- class uqmodels.modelization.DL_estimator.Lstm_WS_ED.LSTM_ED_Generator(X, y, metamodel, batch_min=64, shuffle=True, train=True)[source]
Bases:
PyDataset
- class uqmodels.modelization.DL_estimator.Lstm_WS_ED.Lstm(model_parameters, model_specifications, architecture_parameters, training_parameters={})[source]
Bases:
UQEstimator
- factory(X, y, mask=None, cut_param=None, fit_rescale=True, causality_remove=None, redundancy=None, only_fit_scaler=False)[source]
Feature factory: reshaping and redundancy construction (moving-window embedding representation)
- Parameters:
X (_type_) – input list containing (X_ctx, X_seq)
y (_type_) – raw targets y
mask (_type_, optional) – mask of non-data values
cut_param (_type_, optional) – cut parameters on the y distribution
- Returns:
Model Inputs, Targets and mask.
- Return type:
(Inputs, Targets, mask)
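A minimal usage sketch of the factory step; the shapes and the already-built Lstm estimator named model are hypothetical, actual dimensions depend on the configuration:

```python
# Hypothetical sketch: `model` is an already-instantiated Lstm estimator; shapes are illustrative.
import numpy as np

n, dim_ctx, dim_dyn = 1000, 18, 3
X_ctx = np.random.randn(n, dim_ctx)   # contextual features
X_seq = np.random.randn(n, dim_dyn)   # dynamic series to be window-embedded
y = np.random.randn(n, dim_dyn)       # raw targets

Inputs, Targets, mask = model.factory([X_ctx, X_seq], y)
```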
- fit(Inputs, Targets=None, validation_data=None, epochs=None, steps_per_epoch=None, b_s=None, l_r=None, list_loss=None, param_loss=None, shuffle=True, sample_weight=None, verbose=None, metrics=None, callbacks=None, generator=None, validation_freq=1, **kwargs)[source]
Fit the UQEstimator using training data. :param X: training features :param y: training targets/observations
- get_params()[source]
Get parameters for this estimator.
- Parameters:
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
params – Parameter names mapped to their values.
- Return type:
dict
- modify_dropout(dp)[source]
Method to modify dp: has to be extended, as it is an API modification of model features.
- Parameters:
dp (_type_) – dropout rate.
- plot_metrics(name_loss='val_loss')[source]
Plot metric values recovered from the TensorFlow metrics callback
- Parameters:
name_loss (str, optional) – metric to visualize.
- predict(Inputs, n_ech=6, mask_h=0, mask_m=[0], generator=None, **kwargs)[source]
Predict procedure (partly redundant with the NN fitting procedure; refactoring needed). :param Inputs: Model inputs :type Inputs: List of NN features :param n_ech: Number of MC-Dropout inferences. :type n_ech: int, optional
- Returns:
Meta-model output tuple (Prediction, Var_A, Var_E)
- Return type:
output
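A hedged sketch of consuming the predict output; the (Prediction, Var_A, Var_E) structure follows the docstring above, while the exact shapes depend on the model:

```python
# Hypothetical sketch: `model` and `Inputs` come from the factory/fit steps above.
import numpy as np

pred, var_A, var_E = model.predict(Inputs, n_ech=10)   # MC-Dropout with 10 draws
total_var = var_A + var_E                              # aleatoric + epistemic variance
lower = pred - 2 * np.sqrt(total_var)                  # rough 95% band
upper = pred + 2 * np.sqrt(total_var)
```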
- set_fit_request(*, Inputs: bool | None | str = '$UNCHANGED$', Targets: bool | None | str = '$UNCHANGED$', b_s: bool | None | str = '$UNCHANGED$', callbacks: bool | None | str = '$UNCHANGED$', epochs: bool | None | str = '$UNCHANGED$', generator: bool | None | str = '$UNCHANGED$', l_r: bool | None | str = '$UNCHANGED$', list_loss: bool | None | str = '$UNCHANGED$', metrics: bool | None | str = '$UNCHANGED$', param_loss: bool | None | str = '$UNCHANGED$', sample_weight: bool | None | str = '$UNCHANGED$', shuffle: bool | None | str = '$UNCHANGED$', steps_per_epoch: bool | None | str = '$UNCHANGED$', validation_data: bool | None | str = '$UNCHANGED$', validation_freq: bool | None | str = '$UNCHANGED$', verbose: bool | None | str = '$UNCHANGED$') Lstm
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
Inputs (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Inputs parameter in fit.
Targets (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Targets parameter in fit.
b_s (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for b_s parameter in fit.
callbacks (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for callbacks parameter in fit.
epochs (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for epochs parameter in fit.
generator (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for generator parameter in fit.
l_r (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for l_r parameter in fit.
list_loss (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for list_loss parameter in fit.
metrics (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for metrics parameter in fit.
param_loss (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for param_loss parameter in fit.
sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for sample_weight parameter in fit.
shuffle (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for shuffle parameter in fit.
steps_per_epoch (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for steps_per_epoch parameter in fit.
validation_data (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validation_data parameter in fit.
validation_freq (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for validation_freq parameter in fit.
verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, Inputs: bool | None | str = '$UNCHANGED$', generator: bool | None | str = '$UNCHANGED$', mask_h: bool | None | str = '$UNCHANGED$', mask_m: bool | None | str = '$UNCHANGED$', n_ech: bool | None | str = '$UNCHANGED$') Lstm
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
Inputs (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Inputs parameter in predict.
generator (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for generator parameter in predict.
mask_h (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for mask_h parameter in predict.
mask_m (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for mask_m parameter in predict.
n_ech (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for n_ech parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
uqmodels.modelization.DL_estimator.baseline_models module
- uqmodels.modelization.DL_estimator.baseline_models.cnn_mlp(dim_dyn, dim_target, size_window=40, n_windows=10, step=1, dim_chan=1, dim_z=50, type_output='MC_Dropout', dp=0.08, name='')[source]
- CNN processing with time-distributed MLP
- Schematic: [ | | | ] × n_windows windows, each passed through a time-distributed mlp producing one value per window
- Parameters:
dim_dyn (_type_) – _description_
dim_target (_type_) – _description_
size_window (int, optional) – _description_. Defaults to 40.
n_windows (int, optional) – _description_. Defaults to 10.
step (int, optional) – _description_. Defaults to 1.
dim_chan (int, optional) – _description_. Defaults to 1.
dim_z (int, optional) – _description_. Defaults to 50.
name (str, optional) – _description_. Defaults to “”.
- Returns:
_description_
- Return type:
_type_
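A minimal builder call, assuming cnn_mlp returns a Keras model (parameter values are illustrative only):

```python
# Hypothetical call; dimensions and hyperparameters are illustrative.
from uqmodels.modelization.DL_estimator.baseline_models import cnn_mlp

model = cnn_mlp(dim_dyn=3, dim_target=3, size_window=40, n_windows=10,
                dim_z=50, type_output='MC_Dropout', dp=0.08)
model.summary()  # assumes a Keras model is returned
```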
uqmodels.modelization.DL_estimator.data_embedding module
- class uqmodels.modelization.DL_estimator.data_embedding.Conv1D(*args, **kwargs)[source]
Bases:
Conv1D
- class uqmodels.modelization.DL_estimator.data_embedding.Conv2D(*args, **kwargs)[source]
Bases:
Conv2D
- class uqmodels.modelization.DL_estimator.data_embedding.DataEmbedding_ITS(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.data_embedding.Data_embedding_TS(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.data_embedding.Dropout(*args, **kwargs)[source]
Bases:
Dropout
- class uqmodels.modelization.DL_estimator.data_embedding.Factice_Time_Extension(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.data_embedding.FixedEmbedding(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.data_embedding.Mouving_Window_Embedding(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.data_embedding.Mouving_Windows_Embedding(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.data_embedding.Mouving_conv_Embedding(*args, **kwargs)[source]
Bases:
Layer
- call(inputs)[source]
_summary_
- Parameters:
inputs (_type_) – _description_
mode (str, optional) – _description_. Defaults to “encoder”.
- Returns:
_description_
- Return type:
_type_
- classmethod from_config(config)[source]
Creates an operation from its config.
This method is the reverse of get_config, capable of instantiating the same operation from the config dictionary.
Note: If you override this method, you might receive a serialized dtype config, which is a dict. You can deserialize it as follows:
```python
if "dtype" in config and isinstance(config["dtype"], dict):
    policy = dtype_policies.deserialize(config["dtype"])
```
- Parameters:
config – A Python dictionary, typically the output of get_config.
- Returns:
An operation instance.
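For context, the standard Keras round trip this method supports looks like the sketch below; the Dense layer is only a stand-in for any of the custom layers in this module:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)                       # stand-in for a layer from this module
config = layer.get_config()                            # serialize the operation's configuration
restored = tf.keras.layers.Dense.from_config(config)   # rebuild an equivalent operation
```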
- class uqmodels.modelization.DL_estimator.data_embedding.PositionalEmbedding(*args, **kwargs)[source]
Bases:
Layer
uqmodels.modelization.DL_estimator.data_generator module
- class uqmodels.modelization.DL_estimator.data_generator.Folder_Generator(X, y, metamodel, batch=64, shuffle=True, train=True, random_state=None, dtype=<class 'numpy.float32'>)[source]
Bases:
PyDataset
uqmodels.modelization.DL_estimator.loss module
uqmodels.modelization.DL_estimator.lstm_ed module
- class uqmodels.modelization.DL_estimator.lstm_ed.Lstm_ED_UQ(model_parameters, factory_parameters={}, training_parameters={}, type_output=None, rescale=False, n_ech=5, train_ratio=0.9, name='', random_state=None)[source]
Bases:
NN_UQ
- set_fit_request(*, Inputs: bool | None | str = '$UNCHANGED$', Targets: bool | None | str = '$UNCHANGED$', test: bool | None | str = '$UNCHANGED$', train: bool | None | str = '$UNCHANGED$', training_parameters: bool | None | str = '$UNCHANGED$', verbose: bool | None | str = '$UNCHANGED$') Lstm_ED_UQ
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
Inputs (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Inputs parameter in fit.
Targets (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Targets parameter in fit.
test (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for test parameter in fit.
train (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for train parameter in fit.
training_parameters (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for training_parameters parameter in fit.
verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, generator: bool | None | str = '$UNCHANGED$', type_output: bool | None | str = '$UNCHANGED$') Lstm_ED_UQ
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
generator (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for generator parameter in predict.
type_output (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for type_output parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- uqmodels.modelization.DL_estimator.lstm_ed.build_lstm_stacked(size_window=20, n_windows=5, step=1, dim_target=3, dim_chan=1, dim_horizon=5, dim_ctx=18, dim_z=200, dp=0.05, dp_rec=0.03, type_output=None, num_lstm_enc=1, num_lstm_dec=1, k_reg=(1e-05, 1e-05), layers_enc=[75, 150, 75], layers_dec=[150, 75], list_strides=[2, 1], list_kernels=None, list_filters=None, with_ctx_input=True, with_convolution=True, dim_dyn=None, random_state=None, **kwarg)[source]
Builder for LSTM ED UQ with convolutional preprocessing of lag values
- Parameters:
size_window (int, optional) – Size of window for lag values. Defaults to 20.
n_windows (int, optional) – Number of windows in the past. Defaults to 5.
step (int, optional) – step between windows. Defaults to 1.
dim_target (int, optional) – dimension of the TS. Defaults to 3.
dim_chan (int, optional) – Number of channels of the TS. Defaults to 1.
dim_horizon (int, optional) – future horizon to predict. Defaults to 5.
dim_ctx (int, optional) – Number of ctx features. Defaults to 18.
dim_z (int, optional) – Size of latent space. Defaults to 200.
layers_enc (list, optional) – sizes of the MLP preprocessing layers (after concatenation of the past-value embedding and ctx). Defaults to [75, 150, 75].
layers_dec (list, optional) – sizes of the MLP interpreter layers. Defaults to [150, 75].
dp (float, optional) – dropout. Defaults to 0.05.
dp_rec (float, optional) – recurrent dropout. Defaults to 0.03.
k_reg (tuple, optional) – _description_. Defaults to (0.00001, 0.00001).
with_positional_embedding (bool, optional) – _description_. Defaults to False.
with_ctx_input (bool, optional) – Expect ctx features in addition to lag values. Defaults to True.
with_convolution (bool, optional) – use convolution rather than the whole lag values in the windows. Defaults to True.
type_output (_type_, optional) – mode of UQ (see NN_UQ). Defaults to None.
random_state (bool) – handle experiment randomness using a seed.
- Returns:
multi-step forecaster with UQ
- Return type:
Keras model (LSTM encoder-decoder)
- uqmodels.modelization.DL_estimator.lstm_ed.get_params_dict(dim_ctx, dim_dyn, dim_target, size_window=20, n_windows=5, dim_horizon=5, step=1, dim_chan=1, dp=0.05, dp_rec=0.05, dim_z=50, k_reg=(1e-06, 1e-06), num_lstm_enc=1, num_lstm_dec=1, layers_enc=[150, 75], layers_dec=[200, 125, 75], list_strides=[2, 1, 1, 1], list_filters=[128, 128, 128], list_kernels=None, with_convolution=True, with_ctx_input=True, n_ech=3, type_output='MC_Dropout')[source]
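A hedged sketch of wiring the two helpers together, assuming get_params_dict returns a kwargs dict consumable by build_lstm_stacked (all values are illustrative):

```python
# Hypothetical pairing of the two functions documented above.
from uqmodels.modelization.DL_estimator.lstm_ed import build_lstm_stacked, get_params_dict

params = get_params_dict(dim_ctx=18, dim_dyn=3, dim_target=3,
                         size_window=20, n_windows=5, dim_horizon=5)
model = build_lstm_stacked(**params)  # extra keys (e.g. n_ech) are absorbed by **kwarg
```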
uqmodels.modelization.DL_estimator.metalayers module
- class uqmodels.modelization.DL_estimator.metalayers.Add_query_to_Z_Processing_with_state(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.metalayers.Double_Moving_slice_layer(*args, **kwargs)[source]
Bases:
Layer
Layer that applies a double moving_slice_map
- Parameters:
Layer (_type_) – _description_
- class uqmodels.modelization.DL_estimator.metalayers.EDLProcessing(*args, **kwargs)[source]
Bases:
Layer
- class uqmodels.modelization.DL_estimator.metalayers.LSTMCellMidsize(*args, **kwargs)[source]
Bases:
LSTMCell
Hack to take into account state in the cell: input size dim_z*2 | output size dim_z
- Parameters:
LSTMCell (_type_) – _description_
- class uqmodels.modelization.DL_estimator.metalayers.LSTMCellReturnCellState(*args, **kwargs)[source]
Bases:
LSTMCell
LSTM layer returning output and cell state jointly
- Parameters:
LSTMCell (_type_) – _description_
- uqmodels.modelization.DL_estimator.metalayers.LSTM_DProcessing(n_step, dim_z, flag_mc, dp=0.05, dp_r=0.02, l1_l2_reg=(1e-07, 1e-07), random_state=None)[source]
Decoder processing block: aims to make a temporal projection (useful to hold query information). Input (batch, n_step, dim_z*2), output (batch, n_step, dim_z), with Z the decoding latent space.
- Parameters:
n_step (_type_) – _description_
dim_z (_type_) – _description_
flag_mc (_type_) – _description_
dp (float, optional) – _description_. Defaults to 0.05.
dp_r (float, optional) – _description_. Defaults to 0.02.
l1_l2_reg (tuple, optional) – _description_. Defaults to (0.0000001, 0.0000001).
random_state (bool) – handle experiment randomness using a seed.
- Returns:
Keras model as LSTM Decoder block
- Return type:
Model
- uqmodels.modelization.DL_estimator.metalayers.LSTM_EProcessing(n_step, dim_in, dim_z, flag_mc, dp=0.05, dp_r=0.02, l1_l2_reg=(1e-07, 1e-07), random_state=None)[source]
Encoder processing block: aims to capture dynamics. Input (batch, n_step, dim_in), output (batch, n_step, dim_z*2), with Z_h and Z_state concatenated.
- Parameters:
n_step (_type_) – _description_
dim_in (_type_) – _description_
dim_z (_type_) – _description_
flag_mc (_type_) – _description_
dp (float, optional) – _description_. Defaults to 0.05.
dp_r (float, optional) – _description_. Defaults to 0.02.
l1_l2_reg (tuple, optional) – _description_. Defaults to (0.0000001, 0.0000001).
random_state (bool) – handle experiment randomness using a seed.
- Returns:
Keras model as LSTM Encoder block
- Return type:
Model
- class uqmodels.modelization.DL_estimator.metalayers.Moving_slice_layer(*args, **kwargs)[source]
Bases:
Layer
Layer that applies moving_slice_map
- class uqmodels.modelization.DL_estimator.metalayers.ProbabilisticProcessing(*args, **kwargs)[source]
Bases:
Layer
_summary_
- Parameters:
Layer (_type_) – _description_
- class uqmodels.modelization.DL_estimator.metalayers.RNN_states_in_inputs(*args, **kwargs)[source]
Bases:
RNN
RNN class dispatching [H, C] jointly
- Parameters:
RNN (_type_) – _description_
- call(inputs, mask=None, training=None, initial_state=None, constants=None)[source]
_summary_
- Parameters:
inputs (_type_) – _description_
mask (_type_, optional) – _description_. Defaults to None.
training (_type_, optional) – _description_. Defaults to None.
initial_state (_type_, optional) – _description_. Defaults to None.
constants (_type_, optional) – _description_. Defaults to None.
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.metalayers.Tconv_block_1D(inputs, dim_out, filters=32, kernel=2, strides=2, dp=0.02, flag_mc=False, random_state=None)[source]
- uqmodels.modelization.DL_estimator.metalayers.Tconv_block_2D(inputs, dim_out, filters=32, kernel=5, strides=(2, 1), dp=0.02, flag_mc=False, random_state=None)[source]
- uqmodels.modelization.DL_estimator.metalayers.cnn_dec(size_subseq_dec, dim_out, dim_chan=1, type_output=None, k1=10, min_logvar=-6, dim_z=100, dp=0.01, random_state=None, **kwarg)[source]
Warning: deprecated CNN_dec implementation
- Parameters:
size_subseq_dec (_type_) – _description_
dim_out (_type_) – _description_
dim_chan (int, optional) – _description_. Defaults to 1.
type_output (_type_, optional) – _description_. Defaults to None.
k1 (int, optional) – _description_. Defaults to 10.
min_logvar (int, optional) – _description_. Defaults to -6.
dim_z (int, optional) – _description_. Defaults to 100.
dp (float, optional) – _description_. Defaults to 0.01.
random_state (_type_, optional) – _description_. Defaults to None.
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.metalayers.cnn_dec_1D(size_subseq_dec=10, dim_out=10, dim_z=100, type_output=None, k=32, dp=0.05, random_state=None, **kwarg)[source]
_summary_
- Parameters:
size_subseq_dec (int, optional) – _description_. Defaults to 10.
dim_out (int, optional) – _description_. Defaults to 10.
dim_z (int, optional) – _description_. Defaults to 100.
type_output (_type_, optional) – _description_. Defaults to None.
k (int, optional) – _description_. Defaults to 32.
dp (float, optional) – _description_. Defaults to 0.05.
random_state (bool) – handle experiment randomness using a seed.
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.metalayers.cnn_dec_bis(size_subseq_dec, dim_out, dim_chan=1, type_output=None, min_logvar=-6, list_filters=[64, 64], strides=(1, 1), list_kernels=[4, 4], dim_z=200, random_state=None, **kwarg)[source]
- uqmodels.modelization.DL_estimator.metalayers.cnn_enc(size_subseq_enc, dim_out, dim_chan=1, k1=10, reduction_x1=8, reduction_x2=1, dim_z=100, dp=0.02, random_state=None, **kwarg)[source]
Warning: deprecated CNN_enc implementation
- Parameters:
size_subseq_enc (_type_) – _description_
dim_out (_type_) – _description_
dim_chan (int, optional) – _description_. Defaults to 1.
k1 (int, optional) – _description_. Defaults to 10.
reduction_x1 (int, optional) – _description_. Defaults to 8.
reduction_x2 (int, optional) – _description_. Defaults to 1.
dim_z (int, optional) – _description_. Defaults to 100.
dp (float, optional) – _description_. Defaults to 0.02.
random_state (_type_, optional) – _description_. Defaults to None.
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.metalayers.cnn_enc_1D(size_subseq_enc, dim_out, dim_z, dim_lt, type_output=None, k=32, dp=0.05, random_state=None, **kwarg)[source]
cnn_enc_1D layers
- Parameters:
size_subseq_enc (int, optional) – _description_. Defaults to 100.
dim_out (int, optional) – _description_. Defaults to 10.
dim_z (int, optional) – _description_. Defaults to 100.
dim_lt (int, optional) – _description_. Defaults to 100.
type_output (_type_, optional) – _description_. Defaults to None.
k (int, optional) – _description_. Defaults to 32.
dp (float, optional) – _description_. Defaults to 0.05.
random_state (bool) – handle experiment randomness using a seed.
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.metalayers.cnn_enc_bis(size_subseq_enc=60, dim_target=52, dim_chan=4, list_filters=[64, 64, 32], list_kernels=[(10, 3), 10, 10], list_strides=[(2, 1), (2, 1), (2, 1)], type_output=None, block='2D', dim_z=200, dp=0.02, random_state=None, **kwarg)[source]
Produce a cnn_enc subpart of a deep learning predictor
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.metalayers.conv_block_1D(inputs, dim_chan, filters=32, kernel=2, strides=2, dp=0.02, flag_mc=False, random_state=None)[source]
- uqmodels.modelization.DL_estimator.metalayers.conv_block_2D(inputs, dim_chan, filters=32, kernel=5, strides=(2, 1), dp=0.02, flag_mc=False, random_state=None)[source]
- uqmodels.modelization.DL_estimator.metalayers.dense2D_enc_dec(size_subseq_enc, size_subseq_dec, dim_in, dim_out, layers_size=[100, 50], dim_z=100, dp=0.05, enc_only=False, type_output=None, random_state=None)[source]
- uqmodels.modelization.DL_estimator.metalayers.get_cnn_dec_params(dim_target, size_subseq_dec=1, dim_z=50, random_state=None)[source]
Produce a dict of params that can instantiate a cnn_dec block. :param dim_target: dimension of motifs to convolve :type dim_target: _type_ :param size_subseq_dec: length of motifs to convolve :type size_subseq_dec: int, optional :param dim_z: latent dimension :type dim_z: int, optional :param random_state: handle experiment randomness using a seed. :type random_state: bool
- uqmodels.modelization.DL_estimator.metalayers.get_cnn_enc_params(dim_target, size_subseq_enc=1, dim_z=50, random_state=None)[source]
Produce a dict of params that can instantiate a cnn_enc block. :param dim_target: dimension of motifs to convolve :type dim_target: _type_ :param size_subseq_enc: length of motifs to convolve :type size_subseq_enc: int, optional :param dim_z: latent dimension :type dim_z: int, optional :param random_state: handle experiment randomness using a seed. :type random_state: bool
- uqmodels.modelization.DL_estimator.metalayers.mlp(dim_in=10, dim_out=1, layers_size=[100, 50], name='', dp=0.01, with_mc_dp=True, type_output=None, logvar_min=-10, regularizer_W=(1e-05, 1e-05), shape_2D=None, shape_2D_out=None, random_state=None, **kwargs)[source]
Generate a Keras MLP model to serve as a preprocessing or head subpart.
- Parameters:
dim_in (int) – Input dimension; overridden by shape_2D if the input is 2D.
dim_out (int or None) – Output dimension; if None, the last value of layers_size is used.
layers_size (list of int, optional) – List of layer sizes. Defaults to [100, 50].
name (str, optional) – Name of model. Defaults to “”.
dp (float, optional) – Percentage of dropout. Defaults to 0.01.
type_output (_type_, optional) – Specify the head's last layers among [None: pred, 'MC_Dropout': (pred, var), 'EDL': (pred, mu, alpha, beta)].
logvar_min (int, optional) – Cut-off for small variance estimations.
regularizer_W (tuple, optional) – Regularisation on Dense layers. Defaults to (0.00001, 0.00001).
shape_2D (tuple or None, optional) – if input shape is 2D. Defaults to None.
shape_2D_out
random_state (bool) – handle experiment randomness using a seed.
- Returns:
_description_
- Return type:
_type_
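A minimal call sketch of the builder above; the docstring states a Keras model is generated, and all values here are illustrative:

```python
from uqmodels.modelization.DL_estimator.metalayers import mlp

head = mlp(dim_in=32, dim_out=3, layers_size=[100, 50],
           dp=0.05, type_output='MC_Dropout', random_state=0)
head.summary()
```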
- uqmodels.modelization.DL_estimator.metalayers.moving_slice_map(inputs, n_step, padding, kick_off=0, depth_slice=1)[source]
Apply Layers on n_step slices of input with padding :param input: Input tensors :type input: TF.tensor :param Layers: Submodels :type Layers: Keras.model :param n_step: N_slide :type n_step: int :param padding: padding :type padding: int
- uqmodels.modelization.DL_estimator.metalayers.stack_and_roll_layer(inputs, size_window, size_subseq, padding, name='', format='tf_slice')[source]
Layer that produces a stack of rolled slices to form a batch of subsequences (see the conceptual sketch below)
- Parameters:
inputs (_type_) – layers
size_window (_type_) – size of subsequence
size_subseq (_type_) – _description_
padding (_type_) – _description_
name (str, optional) – _description_. Defaults to “”.
format (str, optional) – _description_. Defaults to “tf_slice”.
- Returns:
_description_
- Return type:
_type_
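As a conceptual illustration only (NumPy, not the Keras layer itself), the moving-window/rolling embedding these layers perform amounts to stacking overlapping subsequences:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

series = np.arange(12.0)                  # (T,) raw series
windows = sliding_window_view(series, 4)  # (T - 4 + 1, 4) overlapping subsequences
print(windows.shape)                      # (9, 4)
```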
uqmodels.modelization.DL_estimator.neural_network_UQ module
- uqmodels.modelization.DL_estimator.neural_network_UQ.Deterministic_prediction(Inputs, model, ddof, generator=False, type_output=None)[source]
Prediction (mu, sigma) of Inputs using a deterministic UQ paradigm (e.g. EDL)
- Parameters:
model (tf.model) – neural network
n_ech (n_draw) – number of dropout draws
Inputs (_type_) – Inputs of model
ddof (_type_) – ddof
generator (bool, optional) – specify if Inputs is generator or not. Defaults to False.
type_output – type_output (EDL)
- Returns:
_description_
- Return type:
_type_
- uqmodels.modelization.DL_estimator.neural_network_UQ.Drawn_based_prediction(Inputs, model, n_ech, ddof, generator=False, type_output='MC_Dropout')[source]
Prediction (mu, sigma) of Inputs using a draw-based UQ paradigm (e.g. MC_dropout)
- Parameters:
model (tf.model) – neural network
n_ech (n_draw) – number of dropout draws
Inputs (_type_) – Inputs of model
ddof (_type_) – ddof
generator (bool, optional) – specify if Inputs is generator or not. Defaults to False.
- Returns:
_description_
- Return type:
_type_
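A conceptual sketch of the draw-based paradigm (not the exact internals of Drawn_based_prediction), assuming the network returns one point prediction per stochastic forward pass:

```python
import numpy as np

def mc_dropout_predict(model, Inputs, n_ech=6, ddof=1):
    # n_ech forward passes with dropout kept active (Keras convention: training=True)
    draws = np.stack([np.asarray(model(Inputs, training=True)) for _ in range(n_ech)])
    mu = draws.mean(axis=0)                        # predictive mean
    var_epistemic = draws.var(axis=0, ddof=ddof)   # spread across draws
    return mu, var_epistemic
```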
- uqmodels.modelization.DL_estimator.neural_network_UQ.Ensemble_based_prediction(Inputs, models, ddof, generator=False, type_output=None)[source]
Prediction (mu, sigma) of Inputs using an ensemble-based UQ paradigm
- Parameters:
model (tf.model) – neural network
n_ech (n_draw) – number of dropout draws
Inputs (_type_) – Inputs of model
ddof (_type_) – ddof
generator (bool, optional) – specify if Inputs is generator or not. Defaults to False.
type_output – type_output (currently unused)
- Returns:
_description_
- Return type:
_type_
- class uqmodels.modelization.DL_estimator.neural_network_UQ.NN_UQ(model_initializer, model_parameters, factory_parameters={}, training_parameters={}, type_output=None, rescale=False, n_ech=5, train_ratio=0.9, var_min=1e-06, name='NN', random_state=None)[source]
Bases:
UQEstimator
Neural Network UQ
- basic_fit(Inputs, Targets, train=None, test=None, epochs=[1000, 1000], b_s=[100, 20], l_r=[0.01, 0.005], sample_w=None, verbose=1, list_loss=['mse'], metrics=None, generator=None, steps_per_epoch=None, shuffle=True, callbacks='default', validation_freq=1, param_loss=None, test_batch_size=None, **kwargs)[source]
- basic_predict(Inputs, n_ech=6, type_output='MC_Dropout', generator=None, test_batch_size=None, **kwarg)[source]
- build_loss(loss, param_loss=None)[source]
Build loss from str or loss and loss_parameters
- Parameters:
loss (_type_) – _description_
param_loss (_type_, optional) – _description_. Defaults to None.
- Returns:
_description_
- Return type:
_type_
- build_metrics(metrics)[source]
Build list of metrics from str or metrics.
- Parameters:
metrics (_type_) – _description_
- dataset_generator(Inputs, Targets, validation_data=None, batch_size=32, shuffle=False, generator=True, test_batch_size=None)[source]
Handle the case with or without a data generator
- Parameters:
Inputs (_type_) – _description_
Targets (_type_) – _description_
validation_data (_type_) – _description_
batch (_type_) – _description_
shuffle (_type_) – _description_
generator (_type_) – _description_
- Returns:
_description_
- Return type:
_type_
- fit(Inputs, Targets, train=None, test=None, training_parameters=None, verbose=None, **kwargs)[source]
Fit the UQEstimator using training data. :param X: training features :param y: training targets/observations
- init_neural_network()[source]
Apply the model_initializer function with model_parameters and store the result in self.model.
- predict(X, type_output=None, generator=None, **kwargs)[source]
Compute prediction (or provide None) and UQ-measure :param X: features
- Returns:
pred, UQ_measure
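A hypothetical end-to-end sketch pairing NN_UQ with the mlp builder from metalayers, assuming plain arrays can be passed directly as Inputs/Targets for an MLP initializer; all names, shapes and values are illustrative:

```python
import numpy as np
from uqmodels.modelization.DL_estimator.metalayers import mlp
from uqmodels.modelization.DL_estimator.neural_network_UQ import NN_UQ

X = np.random.randn(500, 10)
y = np.random.randn(500, 1)

estimator = NN_UQ(model_initializer=mlp,
                  model_parameters={'dim_in': 10, 'dim_out': 1,
                                    'type_output': 'MC_Dropout'},
                  type_output='MC_Dropout', n_ech=5)
estimator.fit(X, y)
pred, UQ_measure = estimator.predict(X)
```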
- set_fit_request(*, Inputs: bool | None | str = '$UNCHANGED$', Targets: bool | None | str = '$UNCHANGED$', test: bool | None | str = '$UNCHANGED$', train: bool | None | str = '$UNCHANGED$', training_parameters: bool | None | str = '$UNCHANGED$', verbose: bool | None | str = '$UNCHANGED$') NN_UQ
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
Inputs (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Inputs parameter in fit.
Targets (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Targets parameter in fit.
test (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for test parameter in fit.
train (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for train parameter in fit.
training_parameters (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for training_parameters parameter in fit.
verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, generator: bool | None | str = '$UNCHANGED$', type_output: bool | None | str = '$UNCHANGED$') NN_UQ
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
generator (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for generator parameter in predict.
type_output (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for type_output parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- uqmodels.modelization.DL_estimator.neural_network_UQ.generate_K_fold_removing_index(n_model, k_fold, train, data_drop, random_state=None)[source]
Generate a list of indices to remove for the k-fold deep ensemble procedure
- Parameters:
n_model (_type_) – Number of models
k_fold (_type_) – Number of fold
train (_type_) – train_flag_idx
data_drop (_type_) – % of data drop
random_state – handle experiment randomness using a seed
- Returns:
list_sampletoremove – indices of samples to remove from train for each submodel
- Return type:
_type_
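Hypothetical call sketch; train is assumed here to be an index array of training positions and data_drop the fraction of samples dropped per submodel:

```python
import numpy as np
from uqmodels.modelization.DL_estimator.neural_network_UQ import generate_K_fold_removing_index

train = np.arange(800)  # hypothetical training indices
drop_lists = generate_K_fold_removing_index(n_model=5, k_fold=5, train=train,
                                            data_drop=0.1, random_state=0)
# drop_lists[i] would give the indices removed from `train` when fitting submodel i
```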
- uqmodels.modelization.DL_estimator.neural_network_UQ.generate_train_test(len_, train_ratio=0.92, last_val=True, random_state=None)[source]
uqmodels.modelization.DL_estimator.transformer_ed module
- class uqmodels.modelization.DL_estimator.transformer_ed.Dense(*args, **kwargs)[source]
Bases:
Dense
- class uqmodels.modelization.DL_estimator.transformer_ed.Dropout(*args, **kwargs)[source]
Bases:
Dropout
- class uqmodels.modelization.DL_estimator.transformer_ed.LayerNormalization(*args, **kwargs)[source]
Bases:
LayerNormalization
- class uqmodels.modelization.DL_estimator.transformer_ed.MultiHeadAttention(*args, **kwargs)[source]
Bases:
MultiHeadAttention
- class uqmodels.modelization.DL_estimator.transformer_ed.TransformerDecoder(*args, **kwargs)[source]
Bases:
Layer
Transformer Decoder layer from https://keras.io/examples/audio/transformer_asr/
- call(enc_out, target, training=None)[source]
_summary_
- Parameters:
enc_out (_type_) – _description_
target (_type_) – _description_
- Returns:
_description_
- Return type:
_type_
- causal_attention_mask(batch_size, n_dest, n_src, dim_horizon, dtype)[source]
Masks the upper half of the dot product matrix in self attention.
This prevents flow of information from future tokens to current token. 1’s in the lower triangle, counting from the lower right corner.
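A conceptual sketch of such a mask, following the Keras ASR example this layer is adapted from (the actual method above additionally takes batch_size, dim_horizon and dtype):

```python
import tensorflow as tf

def causal_mask(n_dest, n_src, dtype=tf.int32):
    i = tf.range(n_dest)[:, None]
    j = tf.range(n_src)
    # 1's in the lower triangle, counted from the lower-right corner
    return tf.cast(i >= j - n_src + n_dest, dtype)

print(causal_mask(3, 5))
```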
- classmethod from_config(config)[source]
Creates an operation from its config.
This method is the reverse of get_config, capable of instantiating the same operation from the config dictionary.
Note: If you override this method, you might receive a serialized dtype config, which is a dict. You can deserialize it as follows:
```python
if "dtype" in config and isinstance(config["dtype"], dict):
    policy = dtype_policies.deserialize(config["dtype"])
```
- Parameters:
config – A Python dictionary, typically the output of get_config.
- Returns:
An operation instance.
- class uqmodels.modelization.DL_estimator.transformer_ed.TransformerEncoder(*args, **kwargs)[source]
Bases:
Layer
Transformer Encoder layer from https://keras.io/examples/audio/transformer_asr/
- call(inputs, training=None)[source]
_summary_
- Parameters:
inputs (_type_) – _description_
training (_type_) – _description_
- Returns:
_description_
- Return type:
_type_
- classmethod from_config(config)[source]
Creates an operation from its config.
This method is the reverse of get_config, capable of instantiating the same operation from the config dictionary.
Note: If you override this method, you might receive a serialized dtype config, which is a dict. You can deserialize it as follows:
```python
if "dtype" in config and isinstance(config["dtype"], dict):
    policy = dtype_policies.deserialize(config["dtype"])
```
- Parameters:
config – A Python dictionary, typically the output of get_config.
- Returns:
An operation instance.
- class uqmodels.modelization.DL_estimator.transformer_ed.Transformer_ED_UQ(model_parameters, factory_parameters={'factory_lag_lt': 0, 'factory_lag_st': 0}, training_parameters={}, type_output=None, rescale=False, n_ech=5, train_ratio=0.9, name='Lstm_stacked', random_state=None)[source]
Bases:
NN_UQ
Transformer_ED for forecasting with UQ: see build_transformer to check model parameters
- set_fit_request(*, Inputs: bool | None | str = '$UNCHANGED$', Targets: bool | None | str = '$UNCHANGED$', test: bool | None | str = '$UNCHANGED$', train: bool | None | str = '$UNCHANGED$', training_parameters: bool | None | str = '$UNCHANGED$', verbose: bool | None | str = '$UNCHANGED$') Transformer_ED_UQ
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
Inputs (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Inputs parameter in fit.
Targets (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for Targets parameter in fit.
test (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for test parameter in fit.
train (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for train parameter in fit.
training_parameters (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for training_parameters parameter in fit.
verbose (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for verbose parameter in fit.
- Returns:
self – The updated object.
- Return type:
object
- set_predict_request(*, generator: bool | None | str = '$UNCHANGED$', type_output: bool | None | str = '$UNCHANGED$') Transformer_ED_UQ
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
generator (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for generator parameter in predict.
type_output (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for type_output parameter in predict.
- Returns:
self – The updated object.
- Return type:
object
- uqmodels.modelization.DL_estimator.transformer_ed.build_transformer(size_window=10, n_windows=5, step=1, dim_target=1, dim_chan=1, dim_horizon=3, dim_ctx=20, dim_z=100, num_heads=2, num_feed_forward=128, num_layers_enc=3, num_layers_dec=2, layers_enc=[150], layers_dec=[150, 75], dp=0.05, dp_rec=0.03, k_reg=(1e-05, 1e-05), list_strides=[2, 1], list_filters=None, list_kernels=None, dim_dyn=None, with_positional_embedding=False, with_ctx_input=True, with_convolution=True, type_output=None, random_state=None, **kwargs)[source]
Builder for Transformer ED with convolutional preprocessing
- Parameters:
size_window (int, optional) – Size of window for lag values. Defaults to 10.
n_windows (int, optional) – Number of windows in the past. Defaults to 5.
step (int, optional) – step between windows. Defaults to 1.
dim_target (int, optional) – dimension of the TS. Defaults to 1.
dim_chan (int, optional) – Number of channels of the TS. Defaults to 1.
dim_horizon (int, optional) – future horizon to predict. Defaults to 3.
dim_ctx (int, optional) – Number of ctx features. Defaults to 20.
dim_z (int, optional) – Size of latent space. Defaults to 100.
num_heads (int, optional) – number of transformer heads. Defaults to 2.
num_feed_forward (int, optional) – transformer feed-forward dimension. Defaults to 128.
num_layers_enc (int, optional) – number of transformer encoder blocks (after concatenation of the past-value embedding and ctx). Defaults to 3.
num_layers_dec (int, optional) – number of transformer decoder blocks. Defaults to 2.
layers_enc (list, optional) – sizes of the MLP preprocessing layers (after concatenation of the past-value embedding and ctx). Defaults to [150].
layers_dec (list, optional) – sizes of the MLP interpreter layers. Defaults to [150, 75].
dp (float, optional) – dropout. Defaults to 0.05.
dp_rec (float, optional) – transformer dropout. Defaults to 0.03.
k_reg (tuple, optional) – _description_. Defaults to (0.00001, 0.00001).
dim_dyn (int, None) – size of dynamic inputs; if None, dim_dyn is assumed to have the same size as dim_target.
with_positional_embedding (bool, optional) – _description_. Defaults to False.
with_ctx_input (bool, optional) – Expect ctx features in addition to lag values. Defaults to True.
with_convolution (bool, optional) – use convolution rather than the whole lag values in the windows. Defaults to True.
type_output (_type_, optional) – mode of UQ (see NN_UQ). Defaults to None.
random_state (bool) – handle experiment randomness using a seed.
- Returns:
multi-step forecaster with UQ
- Return type:
transformer
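A minimal builder call sketch with illustrative values; the returned model is assumed usable as the target of an NN_UQ model_initializer:

```python
from uqmodels.modelization.DL_estimator.transformer_ed import build_transformer

forecaster = build_transformer(size_window=10, n_windows=5, dim_target=1,
                               dim_horizon=3, dim_ctx=20, dim_z=100,
                               num_heads=2, type_output='MC_Dropout')
```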
- uqmodels.modelization.DL_estimator.transformer_ed.get_params_dict(dim_ctx, dim_dyn, dim_target, dim_chan=1, size_window=20, n_windows=5, dim_horizon=5, dim_z=50, dp=0.05, dp_rec=0.02, num_heads=2, num_feed_forward=128, num_layers_enc=3, num_layers_dec=2, layers_enc=[75, 150, 75], layers_dec=[200, 125, 75], list_strides=[2, 1, 1, 1], list_filters=[128, 128, 128], list_kernels=None, with_convolution=True, with_ctx_input=True, n_ech=3, type_output='MC_Dropout', random_state=None)[source]
uqmodels.modelization.DL_estimator.utils module
- uqmodels.modelization.DL_estimator.utils.find_conv_kernel(window_initial, size_final, list_strides)[source]
Return kernel sizes according to: window_initial (size of the window), size_final (final size), list_strides (list of strides).
Returns (list_kernel, list_strides).
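A hypothetical call sketch; the exact kernel sizes returned depend on the helper's internals, presumably the usual stride/kernel output-size arithmetic:

```python
from uqmodels.modelization.DL_estimator.utils import find_conv_kernel

# Reduce a window of 40 lag values down to a representation of size 10 (illustrative values)
list_kernels, list_strides = find_conv_kernel(window_initial=40, size_final=10,
                                              list_strides=[2, 1])
```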