
Interpretability of machine learning models
ASSOCIATED WITH THE AREA OF EXCELLENCE
DATA
WHAT DO WE FOCUS ON?
Our research helps explain the decisions made by machine learning models, for reputational reasons, increasingly for regulatory reasons, and to make decision-making itself more objective. We develop solutions to improve the evaluation of interpretability methods in terms of plausibility and fidelity; to design surrogate models that are robust to adversarial attacks; and to generate adversarial attacks and suitable counterfactuals, which are especially difficult to produce for NLP applications, in order to optimize the interpretability of AI algorithms. We have also developed a new, comprehensive, and intuitive taxonomy that categorizes interpretability methods by their goals and supports a standardized evaluation process.
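To make the surrogate and fidelity ideas concrete, here is a minimal Python sketch (assuming scikit-learn and NumPy) of a local linear surrogate fitted around one instance of a black-box classifier, with fidelity measured as how well the surrogate reproduces the black box's predictions on local perturbations. All function names and parameters here are illustrative, not the team's actual method.

# Minimal sketch of a local surrogate explanation with a fidelity score.
# Illustrative only; not the Square Research Center's actual method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A random forest stands in for an opaque, black-box learner.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_surrogate(x, model, n_samples=1000, scale=0.3):
    """Fit a linear surrogate around instance x and report its local fidelity."""
    # Perturb the instance with Gaussian noise to probe the local decision surface.
    rng = np.random.default_rng(0)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Query the black box for the probability of the positive class.
    p = model.predict_proba(Z)[:, 1]
    # The surrogate is an interpretable linear model of the black box's behavior.
    surrogate = Ridge(alpha=1.0).fit(Z, p)
    # Fidelity: how well the surrogate reproduces the black box locally (R^2).
    fidelity = surrogate.score(Z, p)
    return surrogate.coef_, fidelity

weights, fidelity = local_surrogate(X[0], black_box)
print("feature weights:", np.round(weights, 3))
print("local fidelity (R^2):", round(fidelity, 3))

In this toy setup, the feature weights give an interpretable account of the black box's local decision, and the fidelity score makes the quality of that approximation measurable, which is the kind of evaluation metric referred to above.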
RESEARCHER

Sara MEFAH, PhD
Sara holds an MSc in Artificial Intelligence from Paris Dauphine and an engineering degree in computer science from the École Nationale Supérieure d’Informatique (Algeria). She obtained her PhD in Natural Language Processing from the Commissariat à l’énergie atomique et aux énergies alternatives (CEA) and Paris-Saclay University. Sara is now a Researcher at the Square Research Center, where she contributes in particular to research on the design and development of new interpretability and explainability methods for machine learning models. She also takes part in the research activities of the LAMSADE laboratory (UMR 7243).
Sara’s work on Natural Language Processing has been published and presented at several national and international conferences.
PUBLICATIONS
Causal inference, a key challenge in managing the health crisis
Published in Alliancy
Artificial Intelligence: how to anticipate the new European regulations
Published in Maddyness