TSR
FeatureAttribution(mlmodel, mode)
Bases: InterpretabilityBase
Abstract class to implement custom interpretability methods.

mlmodel: machine learning model to be explained.

Methods

explain: explains an instance.
explain()
abstractmethod
Explains an instance or model.

Parameters

instance: np.array. Not encoded and not normalised instance in two-dimensional shape (m, n).

Returns
pd.DataFrame. The computed explanation.
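As a non-authoritative sketch of how this abstract interface is used, a custom attribution method subclasses FeatureAttribution and implements explain(); the subclass name, its constructor defaults, and the placeholder scores below are purely illustrative and not part of the library.

```python
import numpy as np
import pandas as pd


class ConstantAttribution(FeatureAttribution):
    """Illustrative subclass only; not part of the library."""

    def __init__(self, mlmodel, mode='time'):
        super().__init__(mlmodel, mode)

    def explain(self, instance: np.ndarray) -> pd.DataFrame:
        # `instance` is a raw (not encoded, not normalised) sample of shape (m, n).
        # A real method would compute per-feature relevance from the wrapped model;
        # this sketch just returns zero scores of the same shape.
        scores = np.zeros_like(instance, dtype=float)
        return pd.DataFrame(scores)
```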
plot(item, exp, figsize=(6.4, 4.8), heatmap=False, normelize_saliency=True, vmin=-1, vmax=1, save=None)
Plots the explanation on the explained sample.

Parameters

item: np.array. Time series instance in two-dimensional shape (m, n).

exp: explanation returned by explain().

Returns

matplotlib.pyplot.Figure. Visualization of the explanation.
Saliency(model, NumTimeSteps, NumFeatures, method='GRAD', mode='time', normalize=True)
Bases: FeatureAttribution
Base method for saliency calculation based on [1]. Please use the designated subclasses SaliencyMethods_PYT.py for PyTorch explanations and SaliencyMethods_TF.py for TensorFlow explanations.
References
[1] Ismail, Aya Abdelsalam, et al. "Benchmarking deep learning interpretability in time series predictions." Advances in neural information processing systems 33 (2020): 6441-6452.

PARAMETER | DESCRIPTION |
---|---|
model | Model to be explained. |
NumTimeSteps | Number of time steps. |
NumFeatures | Number of features. |
method | Saliency method to be used. |
mode | Second dimension of the input, either 'time' or 'feat'. |
normalize | Whether or not to normalize the results. |
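A minimal construction sketch, assuming a trained time-series classifier `model` and illustrative dimensions (50 time steps, 6 features); in practice the PyTorch/TensorFlow subclasses noted above are used instead of this base class.

```python
# `model` is an assumed, already-trained classifier; import the concrete
# subclass from SaliencyMethods_PYT.py (PyTorch) or SaliencyMethods_TF.py
# (TensorFlow) rather than using the base class directly.
explainer = Saliency(
    model,            # model to be explained
    NumTimeSteps=50,  # illustrative number of time steps
    NumFeatures=6,    # illustrative number of features
    method='GRAD',    # saliency method to use (default 'GRAD')
    mode='time',      # layout of the second input dimension
    normalize=True,   # normalize the resulting saliency values
)
# exp = explainer.explain(item)   # item: one sample to be explained
# explainer.plot(item, exp)       # visualize the result (see below)
```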
plot(item, exp, figsize=(6.4, 4.8), heatmap=False, save=None)
Plots the explanation on the explained sample.

PARAMETER | DESCRIPTION |
---|---|
item | Instance to be explained. |
exp | Explanation returned by explain(). |
figsize | Desired size of the plot. |
heatmap | 'True' if only the heatmap should be plotted, otherwise 'False'. |
save | Path to save the figure to. |
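For illustration, assuming `explainer`, `item`, and `exp` from the sketch above, the explanation could be rendered and written to disk as follows; the output path is hypothetical.

```python
explainer.plot(
    item,                     # the instance that was explained
    exp,                      # the saliency explanation for `item`
    figsize=(6.4, 4.8),       # default figure size
    heatmap=True,             # draw only the heatmap
    save='saliency_map.png',  # hypothetical path to save the figure
)
```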