Algorithm Overview

This page provides a high-level overview of the algorithms and their features currently implemented in TSInterpret.

| Method      | Backend         | Type                | Data        | Training Set |
|-------------|-----------------|---------------------|-------------|--------------|
| COMTE [1]   | TF, PYT, SK, BB | Instance-Based      | multi       | y            |
| LEFTIST [2] | TF, PYT, SK, BB | Feature Attribution | uni         | y            |
| NUN-CF [3]  | TF, PYT, SK     | Instance-Based      | uni         | y            |
| SETS [4]    | TF              | Instance-Based      | uni & multi | y            |
| TSEvo [5]   | TF, PYT         | Instance-Based      | uni & multi | y            |
| TSR [6]     | TF, PYT         | Feature Attribution | multi       | n            |

BB: black-box, TF: TensorFlow, PYT: PyTorch, SK: sklearn, uni: univariate time series, multi: multivariate time series, y: yes, n: no

NUN-CF: Delaney et al. [3] proposed using a native guide to generate counterfactuals. The native guide is selected via k-nearest neighbors. For generating the counterfactual, they offer three options: the native guide itself, the native guide with barycenter averaging, and a counterfactual based on the native guide and class activation mapping.
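To make the native-guide idea concrete, here is a minimal sketch. It is not TSInterpret's API; the array shapes and the Euclidean 1-NN search are illustrative assumptions (the method can equally use DTW distance):

```python
import numpy as np

def nearest_unlike_neighbour(x, X_train, y_train, target_class):
    """Select the native guide: the training instance of the target class
    that is closest to the query series x (Euclidean distance as a stand-in)."""
    candidates = X_train[y_train == target_class]
    dists = np.linalg.norm(candidates - x, axis=1)
    return candidates[np.argmin(dists)]

# Simplest counterfactual option: return the native guide itself.
# The barycenter-averaging and CAM-based variants refine this guide further.
```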

COMTE: Ates et al. [1] proposed COMTE as a heuristic perturbation approach for multivariate time series counterfactuals. The goal is to change a small number of features (channels) to obtain a counterfactual.
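The underlying idea can be sketched as follows. This is an illustrative simplification, not TSInterpret's API: `predict` is a placeholder classifier returning class probabilities, and the greedy channel order stands in for COMTE's heuristic search over which channels to substitute from a "distractor" instance of the target class:

```python
import numpy as np

def swap_channels_counterfactual(x, distractor, predict, target_class):
    """Greedily replace whole channels of x (shape: channels x time) with the
    distractor's channels until the classifier predicts the target class."""
    cf = x.copy()
    for ch in range(x.shape[0]):
        cf[ch] = distractor[ch]
        if predict(cf[None, ...]).argmax(axis=1)[0] == target_class:
            return cf, ch + 1  # counterfactual and number of changed channels
    return cf, x.shape[0]
```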

LEFTIST: Agnostic Local Explanation for Time Series Classification by Guillemé et al. [2] adapts LIME to the time series classification task. It uses shapelets with fixed length and position as the interpretable components and provides a feature importance for each shapelet.
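A rough sketch of the LIME-style procedure is shown below. Equal-width segments stand in for the fixed shapelets, and the mean-replacement strategy and function names are illustrative assumptions rather than TSInterpret's implementation:

```python
import numpy as np

def leftist_style_importances(x, predict_proba, n_segments=8, n_samples=200, seed=0):
    """LIME-style importances for fixed segments of a univariate series x.
    Segments are randomly toggled off (replaced by the series mean), the model
    is queried on the perturbed samples, and a linear surrogate is fit;
    its coefficients serve as segment importances."""
    rng = np.random.default_rng(seed)
    T = len(x)
    bounds = np.linspace(0, T, n_segments + 1, dtype=int)
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    samples = np.tile(x.astype(float), (n_samples, 1))
    for i, mask in enumerate(masks):
        for s, keep in enumerate(mask):
            if not keep:
                samples[i, bounds[s]:bounds[s + 1]] = x.mean()
    preds = predict_proba(samples)[:, 1]  # probability of the explained class
    coef, *_ = np.linalg.lstsq(np.c_[masks, np.ones(n_samples)], preds, rcond=None)
    return coef[:-1]                      # one importance value per segment
```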

SETS: SETS [4] generates counterfactuals by perturbing previously extracted shapelets into the original (to-be-explained) time series.
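The perturbation step itself can be illustrated with a short sketch; shapelet mining and the selection of where to insert, which SETS handles itself, are omitted here, and the function name is hypothetical:

```python
import numpy as np

def insert_shapelet(x, shapelet, start):
    """Perturb a (target-class) shapelet into a univariate series x at position
    `start`, yielding a counterfactual candidate."""
    cf = x.copy()
    end = min(start + len(shapelet), len(cf))
    cf[start:end] = shapelet[: end - start]
    return cf
```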

TSEvo: TSEvo [5] is an evolutionary counterfactual generation method for uni- and multivariate time series classification. In contrast to other methods, counterfactuals are generated by solving a multi-objective optimization problem with multiple perturbation operators that account for different time series properties (Gaussian-, frequency-, and dataset-based).
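A highly simplified sketch of such mixed perturbation operators is given below. The operator details and the `reference` argument are illustrative assumptions; TSEvo itself embeds operators of this kind in a multi-objective evolutionary search that also optimizes closeness and sparsity:

```python
import numpy as np

def mutate(x, reference, rng):
    """One simplified perturbation step mixing TSEvo-style operators:
    Gaussian noise on a window, a frequency-domain change, or splicing in a
    window from a reference (training) series of the target class."""
    cf, T = x.copy(), len(x)
    i, j = sorted(rng.integers(0, T, 2))
    op = rng.integers(3)
    if op == 0:                                    # Gaussian: add noise on a window
        cf[i:j + 1] += rng.normal(0, x.std() + 1e-8, j + 1 - i)
    elif op == 1:                                  # frequency: damp one spectral component
        spec = np.fft.rfft(cf)
        spec[rng.integers(len(spec))] *= rng.uniform()
        cf = np.fft.irfft(spec, n=T)
    else:                                          # dataset-based: splice in reference values
        cf[i:j + 1] = reference[i:j + 1]
    return cf
```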

TSR: Temporal Saliency Rescaling [6] first calculates the importance of each time step and then the feature importance per time step on the basis of different saliency methods, both back-propagation-based and perturbation-based. For a full list of implemented methods, we refer the reader to our code documentation. The implementation in TSInterpret is based on tf-explain [7], SHAP, and Captum (Kokhlikyan et al., 2020).
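The two-step rescaling can be sketched as follows. `baseline_saliency` is a placeholder for any wrapped saliency method returning a (features x time) map, and masking with a constant value is an illustrative simplification of the paper's masking strategy:

```python
import numpy as np

def temporal_saliency_rescaling(x, baseline_saliency, mask_value=0.0, alpha=0.0):
    """Simplified TSR for x with shape (features, time).
    Step 1: time-step relevance = change in the saliency map when that step is masked.
    Step 2: feature relevance is computed only for relevant time steps; the final
    score is the product of both."""
    F, T = x.shape
    base = baseline_saliency(x)
    time_rel = np.zeros(T)
    for t in range(T):
        masked = x.copy()
        masked[:, t] = mask_value
        time_rel[t] = np.abs(baseline_saliency(masked) - base).sum()
    scores = np.zeros((F, T))
    for t in np.nonzero(time_rel > alpha)[0]:
        for f in range(F):
            masked = x.copy()
            masked[f, t] = mask_value
            feat_rel = np.abs(baseline_saliency(masked) - base).sum()
            scores[f, t] = feat_rel * time_rel[t]
    return scores
```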


  1. Emre Ates, Burak Aksar, Vitus J. Leung, and Ayse K. Coskun. Counterfactual Explanations for Machine Learning on Multivariate Time Series Data. 2021 International Conference on Applied Artificial Intelligence (ICAPAI), pages 1–8, May 2021. arXiv: 2008.10781. URL: http://arxiv.org/abs/2008.10781 (visited on 2022-03-25), doi:10.1109/ICAPAI49758.2021.9462056

  2. Maël Guillemé, Véronique Masson, Laurence Rozé, and Alexandre Termier. Agnostic Local Explanation for Time Series Classification. In 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), 432–439. Portland, OR, USA, November 2019. IEEE.

  3. Eoin Delaney, Derek Greene, and Mark T. Keane. Instance-Based Counterfactual Explanations for Time Series Classification. In Antonio A. Sánchez-Ruiz and Michael W. Floyd, editors, Case-Based Reasoning Research and Development, volume 12877, pages 32–47. Springer International Publishing, Cham, 2021. 

  4. Omar Bahri, Soukaina Filali Boubrahimi, and Shah Muhammad Hamdi. Shapelet-based counterfactual explanations for multivariate time series. arXiv preprint arXiv:2208.10462, 2022. 

  5. Jacqueline Höllig, Cedric Kulbach, and Steffen Thoma. Tsevo: evolutionary counterfactual explanations for time series classification. In 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), 29–36. IEEE, 2022. 

  6. Aya Abdelsalam Ismail, Mohamed Gunady, Héctor Corrada Bravo, and Soheil Feizi. Benchmarking Deep Learning Interpretability in Time Series Predictions. arXiv:2010.13924 [cs, stat], October 2020. arXiv: 2010.13924. 

  7. Raphael Meudec. Tf-explain. February 2021. URL: https://zenodo.org/record/5711704 (visited on 2022-05-24), doi:10.5281/ZENODO.5711704