LEFTIST¶
Agnostic Local Explanation for Time Series Classification (LEFTIST) by Guillemé et al. (2019) [1] adapts LIME to time series classification: it uses shapelets with a fixed length and position as the interpretable components and provides a feature importance for each shapelet.
The code in TSInterpret is based on the authors' implementation.
[1] Guillemé, Maël, et al. "Agnostic local explanation for time series classification." 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2019.
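To make the idea behind LEFTIST more concrete, below is a minimal, self-contained sketch of a LIME-style surrogate over fixed, equally sized slices of a univariate series. It is illustrative only and not the TSInterpret implementation: the function name lime_slice_importance is made up, predict_proba stands in for any classifier that maps a batch of series to class probabilities, and replacing deactivated slices with a straight line corresponds roughly to the straight_line transform used later in this example.
import numpy as np
from sklearn.linear_model import Ridge

def lime_slice_importance(series, predict_proba, n_slices=10, n_samples=500, target=0, seed=0):
    # Cut the series into equally sized slices (the interpretable components).
    rng = np.random.default_rng(seed)
    bounds = np.linspace(0, len(series), n_slices + 1, dtype=int)
    # Binary masks: 1 keeps a slice, 0 replaces it with a straight line.
    masks = rng.integers(0, 2, size=(n_samples, n_slices))
    neighbors = np.tile(series.astype(float), (n_samples, 1))
    for i, mask in enumerate(masks):
        for s in np.flatnonzero(mask == 0):
            lo, hi = bounds[s], bounds[s + 1]
            neighbors[i, lo:hi] = np.linspace(series[lo], series[hi - 1], hi - lo)
    # Query the black box, weight neighbors by how many slices they keep,
    # and fit a weighted linear surrogate on (mask -> class probability).
    y = predict_proba(neighbors)[:, target]
    weights = masks.sum(axis=1) / n_slices
    surrogate = Ridge(alpha=1.0).fit(masks, y, sample_weight=weights)
    return surrogate.coef_  # one importance score per slice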
import pickle
import numpy as np
import torch
from ClassificationModels.CNN_T import ResNetBaseline, UCRDataset
from tslearn.datasets import UCR_UEA_datasets
import sklearn
Dataset¶
dataset='ECG5000'
Load Data¶
Load the data and reshape it to fit a 1D-Conv ResNet. Note that the input for a 1D-Conv ResNet has the shape (batch, features, timesteps).
train_x,train_y, test_x, test_y=UCR_UEA_datasets().load_dataset(dataset)
train_x = train_x.reshape(-1,1, train_x.shape[-2])
test_x = test_x.reshape(-1,1, test_x.shape[-2])
enc1=sklearn.preprocessing.OneHotEncoder(sparse=False).fit(np.vstack((train_y.reshape(-1,1),test_y.reshape(-1,1))))
pickle.dump(enc1,open(f'../../ClassificationModels/models/{dataset}/OneHotEncoder.pkl','wb'))
train_y=enc1.transform(train_y.reshape(-1,1))
test_y=enc1.transform(test_y.reshape(-1,1))
Model Training¶
Loads a pretrained 1D-Conv ResNet.
n_pred_classes =train_y.shape[1]
train_dataset = UCRDataset(train_x.astype(np.float64),train_y.astype(np.int64))
test_dataset = UCRDataset(test_x.astype(np.float64),test_y.astype(np.int64))
train_loader = torch.utils.data.DataLoader(train_dataset,batch_size=16,shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset,batch_size=1,shuffle=False)
model = ResNetBaseline(in_channels=1, num_pred_classes=n_pred_classes)
model.load_state_dict(torch.load(f'../../ClassificationModels/models/{dataset}/ResNet'))
model.eval()
ResNetBaseline(
  (layers): Sequential(
    (0): ResNetBlock(
      (layers): Sequential(
        (0): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(1, 64, kernel_size=(8,), stride=(1,))
            (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
        (1): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(64, 64, kernel_size=(5,), stride=(1,))
            (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
        (2): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(64, 64, kernel_size=(3,), stride=(1,))
            (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
      )
      (residual): Sequential(
        (0): Conv1dSamePadding(1, 64, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): ResNetBlock(
      (layers): Sequential(
        (0): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(64, 128, kernel_size=(8,), stride=(1,))
            (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
        (1): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(128, 128, kernel_size=(5,), stride=(1,))
            (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
        (2): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(128, 128, kernel_size=(3,), stride=(1,))
            (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
      )
      (residual): Sequential(
        (0): Conv1dSamePadding(64, 128, kernel_size=(1,), stride=(1,))
        (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (2): ResNetBlock(
      (layers): Sequential(
        (0): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(128, 128, kernel_size=(8,), stride=(1,))
            (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
        (1): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(128, 128, kernel_size=(5,), stride=(1,))
            (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
        (2): ConvBlock(
          (layers): Sequential(
            (0): Conv1dSamePadding(128, 128, kernel_size=(3,), stride=(1,))
            (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (2): ReLU()
          )
        )
      )
    )
  )
  (final): Linear(in_features=128, out_features=5, bias=True)
)
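As an optional sanity check, run one test sample through the loaded model and confirm that it returns one score per class. Whether ResNetBaseline outputs logits or probabilities is not shown here, so applying a softmax below is an assumption made for readability.
with torch.no_grad():
    sample = torch.from_numpy(test_x[:1]).float()  # shape (1, 1, timesteps)
    scores = model(sample)                         # shape (1, n_pred_classes)
    print(scores.shape, torch.softmax(scores, dim=-1))  # softmax is an assumption; the model may already return probabilities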
Interpretability Algorithm¶
Using an interpretability algorithm consists of 4 steps:
1. Load the Interpretability Method
2. Instantiate the Method with the desired Parameters
3. Call the explain Method
4. Plot the results
2. Initialization¶
LEFTIST works with any type of black-box classifier and can therefore be used with PyTorch (PYT), TensorFlow (TF), Sklearn (SK), or a plain predict function, as long as the predict function returns a probability distribution. The initialization takes the following arguments:
model_to_explain
: The model to be explained.

data
: Reference set of type (X_train, y_train).

mode
: Second dimension, 'time' or 'feat'.

backend
: TF, PYT, SK or func.

transform_name
: Name of the transform used to generate neighbors (a rough illustration of the three options follows this list):

    * `uniform`: Mean Transform
    * `straight_line`: StraightlineTransform
    * `background`: RandBackgroundTransform

learning_process_name
: 'Lime' or 'Shap'.

nb_interpretable_feature
: Number of desired interpretable features.
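The three transforms differ in how a deactivated interpretable segment is filled in when neighbors are generated. The snippet below is a rough illustration on one arbitrary slice (the indices 20:40 are made up); the actual TSInterpret transform classes may differ in detail.
segment = test_x[0, 0, 20:40]  # an arbitrary slice of one test series
uniform_fill = np.full_like(segment, segment.mean())                     # uniform: mean of the slice
straight_line_fill = np.linspace(segment[0], segment[-1], len(segment))  # straight line between the endpoints
background_fill = train_x[np.random.randint(len(train_x)), 0, 20:40]     # values from a random background series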
explained_instance = train_x[1]
nb_interpretable_feature = 10
explanation_size = 2
nb_neighbors = 1000
learning_process_name = 'Lime'
transform_name = 'straight_line'
model_to_explain=model
from TSInterpret.InterpretabilityModels.leftist.leftist import LEFTIST
leftist = LEFTIST(model_to_explain,(test_x,None),mode='feat', backend='PYT', \
learning_process_name='Lime',transform_name='straight_line')
3. Call the explain method.¶
Prepare the instance and the predicted label of the instance as parameters for the explain method (a snippet for obtaining the predicted label follows the parameter list below).
model_to_explain
: The model to be explained.

nb_neighbors
: Number of neighbors used in the calculations.

idx_label
: Index of the label to explain. If None, an explanation is returned for each label.

explanation_size
: Size of the desired explanation.
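Since the predicted label is mentioned above, the optional snippet below shows one way to obtain it for the instance; how (and whether) a label index is passed to explain may depend on the installed TSInterpret version, so the call in this example leaves it out.
with torch.no_grad():
    logits = model(torch.from_numpy(np.array([explained_instance])).float())
    predicted_label = int(logits.argmax(dim=-1))  # index of the most probable class
print(predicted_label)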
explanations = leftist.explain(np.array(explained_instance))
4. Visualize¶
Visualize the explanation by calling .plot(). Parameters are the original instance to be explained and the explanation.
leftist.plot(np.array([explained_instance]),np.array([explanations[1]]),save='LEFTIST.png')
Additional examples, e.g. for the use with LSTM or TF, can be found here.