• EN
  • FR
Site carrière CEA : toutes nos offres d'emploi
CEA

Suivez nous

  •  

  • Accueil
  • Déposer une candidature spontanée
  • Ma recherche, mon alerte
  • CDI/CDD pour alternants/stag. CEA
  • Consulter nos sujets de Thèses
  • Un souci ? Contactez-nous
 

Connexion Espace candidat

J'ai déjà un espace candidat

Connexion à l'espace candidat




Mot de passe perdu

S'inscrire Je me crée un espace candidat

Vous n'avez pas encore votre propre espace candidat. Créez-le en cliquant ici.
Un souci ? Contactez-nous à
admin-poem@cea.fr

 

Vous êtes ici :  Accueil  ›  Liste des offres  ›  Détail de l'offre

Ma sélection : 0 offre(s)
Site carrière CEA : toutes nos offres d'emploi
CEA

Suivez nous

  •  

Menu Site carrière CEA

  • Accueil
  • Déposer une candidature spontanée
  • Ma recherche, mon alerte
  • CDI/CDD pour alternants/stag. CEA
  • Consulter nos sujets de Thèses
  • Un souci ? Contactez-nous
Pause
Lecture
Moteur de recherche d'offres d'emploi CEA
Voir toutes les offres
Flux RSS et autres flux
Information

Backdoor Attack Scalability and Defense Evaluation in Large Language Models H/F

  • Envoyer cette offre à un ami
  • Imprimer cette offre (nouvelle fenêtre)
  •  


Vacancy details

General information

CEA (logo)

Organisation

The French Alternative Energies and Atomic Energy Commission (CEA) is a key player in research, development and innovation in four main areas :
• defence and security,
• nuclear energy (fission and fusion),
• technological research for industry,
• fundamental research in the physical sciences and life sciences.

Drawing on its widely acknowledged expertise, and thanks to its 16000 technicians, engineers, researchers and staff, the CEA actively participates in collaborative projects with a large number of academic and industrial partners.

The CEA is established in ten centers spread throughout France
  

Reference

2025-37960  

Position description

Category

Mathematics, information, scientific, software

Contract

Internship

Job title

Backdoor Attack Scalability and Defense Evaluation in Large Language Models H/F

Subject

Large Language Models (LLMs) deployed in safety-critical domains are increasingly vulnerable to backdoor and data poisoning attacks. Recent studies show that even a small number of poisoned samples can compromise models at massive scales, highlighting urgent security challenges. This internship focuses on empirically testing and advancing poisoning attacks and defenses in LLMs through systematic experimentation and adversarial evaluation. Tasks include implementing state-of-the-art attack methods (e.g., jailbreaks, denial-of-service, data extraction), evaluating defenses, analyzing attack scalability across model sizes, and establishing standardized evaluation metrics such as Attack Success Rate and Clean Accuracy to support reproducible benchmarking and robust model defense strategies.

Contract duration (months)

6

Job description

Context: Large Language Models (LLMs) deployed in safety-critical domains face significant threats from backdoor attacks. Recent empirical evidence contradicts previous assumptions about attack scalability: poisoning attacks remain effective regardless of model or dataset size, requiring as few as 250 poisoned documents to compromise models from up to 13B parameters. This suggests data poisoning becomes easier, not harder, as systems scale.

Backdoors persist through post-training alignment techniques like Supervised Fine-Tuning and Reinforcement Learning from Human Feedback, compromising current defenses. However, persistence depends critically on poisoning timing and backdoor characteristics. Current verification methods are computationally prohibitive—Proof-of-Learning requires full model retraining and complete training transcript access. While step-wise verification shows promise for runtime detection, scalability to production models and resilience against adaptive adversaries remain unresolved.

Existing defenses focus on post-training detection rather than preventing attack success during training. Advancing data poisoning scaling dynamics—understanding how attack success correlates with dataset composition, poisoning density, and model capacity—is essential for developing evidence-based threat models and defense strategies.

Objective: This internship aims to empirically test and advance data poisoning attacks and defenses for LLMs through systematic experimentation and adversarial evaluation. Key responsibilities include: implementing state-of-the-art attack methods across multiple vectors (jailbreaking, targeted refusal, denial-of-service, information extraction); testing attacks on diverse model architectures and scales; establishing standardized evaluation protocols with metrics such as Attack Success Rate and Clean Accuracy; evaluating existing defenses, particularly step-wise verification; and developing reproducible test suites for objective defense benchmarking.

Applicant Profile

Requirements:

  1. Background in computer science or a related field, with a focus on machine learning security, or adversarial machine learning.
  2. Strong programming skills in languages commonly used for machine learning tasks (e.g., Python, C++).
  3. Experience with machine learning systems, model training, or adversarial robustness is a plus.
  4. Ability to work independently and collaborate in a research-driven environment.
  5. Comfortable working in English, essential for documentation purposes.

Position location

Site

DAM Île-de-France

Job location

France, Ile-de-France, Essonne (91)

Location

Gif-sur-Yvette

Candidate criteria

Languages

English (Fluent)

Prepared diploma

Bac+5 - Master 2

Recommended training

Computer Science

PhD opportunity

Oui

Requester

Position start date

27/10/2025


Autres offres

Ces offres pourraient vous intéresser

Caractérisation de suspensions de bactériophages par imagerie de fluctuations H/F

Ajouter cette offre à ma sélection : Caractérisation de suspensions de bactériophages par imagerie de fluctuations H/F (2025-37716)
  • Réf. : 2025-37716
  • Stage
  • Isère (38)
  • Grenoble

Développement de nouveau scintillateur pour l'imagerie rayons X de haute cadence H/F

Ajouter cette offre à ma sélection : Développement de nouveau scintillateur pour l'imagerie rayons X de haute cadence H/F (2025-37611)
  • Réf. : 2025-37611
  • Stage
  • Essonne (91)
  • Saclay

Internship (6 months) in Electromagnetics/AI H/F

Ajouter cette offre à ma sélection : Internship (6 months) in Electromagnetics/AI H/F (2025-38037)
  • Réf. : 2025-38037
  • Stage
  • Grenoble
  • Mentions légales
  • Cookies
  • Paramétrer vos cookies
  • Accessibilité : partiellement conforme
  • Plan du site
Aller en haut