
Interpretability of machine learning models

Our research focuses on three directions: developing better evaluation metrics for interpretability methods, in terms of both plausibility and fidelity; designing surrogate (substitution) models that are robust to adversarial attacks; and generating adversarial attacks and plausible counterfactuals, a task that is especially difficult for NLP applications, in order to ultimately optimize the interpretability of AI algorithms.
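To make the surrogate-model idea concrete, here is a minimal LIME-style sketch (not the group's actual method): it perturbs an input around a point, queries a black-box model, and fits a local linear surrogate whose slopes serve as feature attributions. The names `black_box` and `local_surrogate` are hypothetical, and the toy model stands in for a trained classifier's score.

```python
import random

def black_box(x1, x2):
    # Hypothetical opaque model standing in for a trained classifier's score:
    # nonlinear in x1, linear in x2. Its true local gradient at (1, 0) is (2, 3).
    return x1 * x1 + 3.0 * x2

def local_surrogate(f, x0, n_samples=2000, scale=0.1, seed=42):
    """Fit a local linear surrogate around x0 by perturb-and-refit.

    Each slope is estimated as cov(perturbation, output) / var(perturbation),
    which is valid here because perturbations are drawn independently per
    feature. The slopes act as local feature attributions, in the spirit
    of LIME-style surrogate explanations."""
    rng = random.Random(seed)
    d1, d2, ys = [], [], []
    for _ in range(n_samples):
        dx1, dx2 = rng.gauss(0.0, scale), rng.gauss(0.0, scale)
        d1.append(dx1)
        d2.append(dx2)
        ys.append(f(x0[0] + dx1, x0[1] + dx2))
    mean_y = sum(ys) / n_samples

    def slope(dxs):
        m = sum(dxs) / n_samples
        cov = sum((dx - m) * (y - mean_y) for dx, y in zip(dxs, ys))
        var = sum((dx - m) ** 2 for dx in dxs)
        return cov / var

    return slope(d1), slope(d2)

w1, w2 = local_surrogate(black_box, (1.0, 0.0))
print(f"local attributions: x1 ~ {w1:.2f}, x2 ~ {w2:.2f}")
```

A surrogate like this is exactly the kind of explanation that robustness research probes: small adversarial perturbations of the input can change the fitted attributions without changing the model's prediction, which motivates designing surrogates that remain stable under such attacks.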