CEA job search engine

Formal Explanations for Trustworthy Artificial Intelligence H/F


Offer details

General information

Parent organization

The CEA is a major research player serving citizens, the economy, and the State.

It provides concrete solutions to their needs in four main areas: the energy transition, the digital transition, technologies for the medicine of the future, and defense and security, all built on a foundation of fundamental research. For more than 75 years, the CEA has been committed to the scientific, technological, and industrial sovereignty of France and Europe, for a present and a future that are better controlled and safer.

Located at the heart of regions equipped with very large research infrastructures, the CEA has a wide range of academic and industrial partners in France, in Europe, and internationally.

The CEA's 20,000 employees share three core values:

• A sense of responsibility
• Cooperation
• Curiosity
  

Reference

2023-29214  

Unit description

The French Alternative Energies and Atomic Energy Commission (CEA) is a
key player in research, development, and innovation. Drawing on the widely
acknowledged expertise of its 16,000 staff spread across 9 research centers,
with a budget of 4.1 billion euros, the CEA actively participates in more than
400 European collaborative projects with numerous academic partners (notably
as a member of Paris-Saclay University) and industrial partners. Within the CEA
Technological Research Division, the CEA List institute addresses the challenges
posed by smart digital systems.
Among other activities, the research teams of CEA List's Software Safety and
Security Laboratory (LSL) design and implement automated analyses to make
software systems more trustworthy, to exhaustively detect their vulnerabilities, to
guarantee conformity to their specifications, and to accelerate their certification.
The lab recently extended its activities to the topic of AI trustworthiness
and created a new research group: AISER (Artificial Intelligence Safety,
Explainability and Robustness).

Position description

Field

Mathematics, scientific information, software

Contract

Internship

Job title

Formal Explanations for Trustworthy Artificial Intelligence H/F

Internship subject

Systems incorporating artificial intelligence (AI) components have a considerable
influence on society and on the physical infrastructures on which society relies.
The role played by these systems creates an unprecedented need for audit and
trust. However, the scale of the data they process (sensory and temporal data)
and the computational complexity of these programs complicate their analysis.
In addition, existing explainability techniques are difficult to compare, and their
unreliability limits their applicability to realistic systems.

Contract duration (in months)

4 to 6 months

Offer description

The aim of this internship is to explore the scalability of formal explainable AI
techniques. In particular, the internship's goal is to identify the limits of existing
techniques and to propose new approaches, for instance inspired by (Bassan and
Katz 2023). To do so, the intern will leverage two tools developed by the
team: the CAISAR platform, for formulating verification queries, and the PyRAT
analyzer, for solving them.
The broad internship goals are:
• familiarization with the state of the art on explainable AI (Molnar 2022)
• implementation of contrastive explanation methods in the CAISAR platform
• benchmarking the solution on deep neural networks on selected datasets
• if time allows, using the PyRAT analyzer to formulate a validity bound for
overapproximation-based explanations
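To give a flavor of what a contrastive explanation computes, here is a minimal illustrative sketch. It is hypothetical and does not use the CAISAR or PyRAT APIs: it finds, by brute force on a tiny linear classifier, a smallest set of input features that can be changed (within given bounds) to flip the prediction. Real tools replace the brute-force search with verification queries discharged by a solver.

```python
# Illustrative sketch only (not the CAISAR/PyRAT API): a contrastive
# explanation answers "which features must change to alter the decision?".
from itertools import combinations

def predict(w, b, x):
    """Linear classifier: class 1 if w.x + b > 0, else class 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def contrastive_explanation(w, b, x, lo, hi):
    """Return a smallest set of feature indices that, when set freely
    within [lo, hi], can flip the prediction on x. Other features stay
    fixed. Exponential search: fine for a handful of features only."""
    base = predict(w, b, x)
    n = len(x)
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            # Push each free feature to the bound that moves the score
            # toward the opposite class (optimal for a linear model).
            z = list(x)
            for i in subset:
                if base == 1:  # want score <= 0: minimize w_i * z_i
                    z[i] = lo[i] if w[i] > 0 else hi[i]
                else:          # want score > 0: maximize w_i * z_i
                    z[i] = hi[i] if w[i] > 0 else lo[i]
            if predict(w, b, z) != base:
                return subset
    return None  # prediction cannot be flipped within the bounds

# Example: 3-feature classifier where feature 0 dominates the decision.
w, b = [2.0, 0.5, 0.1], -1.0
x = [1.0, 1.0, 1.0]  # predicted class 1
expl = contrastive_explanation(w, b, x, lo=[0.0] * 3, hi=[1.0] * 3)
# expl is (0,): changing feature 0 alone suffices to flip the decision.
```

For neural networks the search space is no longer amenable to this enumeration, which is precisely the scalability question the internship addresses: the candidate "can this subset flip the decision?" checks become verification queries for an analyzer.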

Tools / Methods / Software

Explainability, Neural Network, Why3, CAISAR, PyRAT

Candidate profile

The candidate will work at the crossroads of formal verification and artificial
intelligence. As it is not realistic to be an expert in both fields, we encourage
candidates who do not meet the full qualification requirements to apply
nonetheless. We strive to provide an inclusive and enjoyable workplace. We are
aware of discrimination based on gender (especially prevalent in our fields),
race, or disability, and we are doing our best to fight it.
• Minimal
– Master's student or equivalent (2nd/3rd year of engineering school) in
computer science
– knowledge of OCaml
– ability to work in a team; some knowledge of version control
• Preferred
– notions of AI and neural networks
– knowledge of formal verification in general, and of SMT solving in
particular
– knowledge of Why3

Job location

Site

Saclay

Location

France

City

Orsay

Candidate criteria

Languages

  • French (Fluent)
  • English (Fluent)

Degree in preparation

Bac+5 - Master 2

Recommended background

Computer science, mathematics

Possibility of continuing with a PhD

Yes