
Explainability of ‘black box’ AI models

An R&D programme carried out in partnership with Paris Dauphine University.

Context

Artificial intelligence models are used in many fields (e.g. credit granting, fraud detection) and deliver very good results. However, they lack transparency and explainability, which hinders their adoption by business teams and management, as well as their acceptance by regulators.

Why?

The programme aims to improve the methods and tools used to explain the results of artificial intelligence models (without opening up their internal ‘black box’ workings) and to justify, or challenge, the relevance of their predictions.

How?

The research programme enables better interpretation of the models and focuses on analysing their flaws and detecting their internal biases, in order to identify areas for improvement where necessary. The work centres on comparing existing explanation methods and improving the metrics used to compare explanations; a short illustrative sketch follows.
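As an illustration of the kind of model-agnostic (‘black box’) explanation technique such comparisons cover, here is a minimal sketch of permutation feature importance, which probes a trained model only through its predictions. The dataset, model, and parameters are illustrative assumptions (Python with scikit-learn), not artefacts of the programme itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real use case (e.g. credit scoring).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ensemble model: accurate, but not directly interpretable.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance treats the model purely as a prediction function:
# shuffle one feature at a time and measure how much the test score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

The spread of the importance scores across repeats gives a first, crude measure of an explanation's stability, one example of the kind of metric the comparison of explanation methods relies on.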

Who?

Companies in all sectors.

RESEARCHER

Doctor Sara MEFTAH (Researcher (PhD) - Consultant)

With a Master's degree (Master 2) in Artificial Intelligence (Paris Dauphine-PSL) and an engineering degree in Computer Science, Sara obtained her doctorate in Natural Language Processing at the Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and Université Paris-Saclay. Sara is a researcher at the Square Research Center, where she contributes in particular to research on the design and development of new methods for the interpretability and explainability of machine learning models.

Publication
  • “Intelligence Artificielle : comment anticiper les nouvelles réglementations européennes” [“Artificial intelligence: how to anticipate the new European regulations”], Maddyness, 25/10/2021

