🧠 PhD_work: Performance Guarantees for AI Systems in Image Compression for Autonomous Vehicles

This repository contains a public summary of the core research, experiments, and results from my PhD thesis. (🔒 Full research materials are available upon request; contact me at jeremyjaspar2@gmail.com or on LinkedIn.)

"Methods for Obtaining Performance Guarantees from an Artificial Intelligence System Applied to Image Compression for the Design of Autonomous Vehicles"

Conducted at Stellantis and Université Sorbonne Paris Nord (L2TI), this work bridges theoretical methods of verification and large-scale empirical validation to build trust in deep learning systems for safety-critical applications.


🚗 Application: AI-Based Field Monitoring

Field Monitoring is an embedded system designed to monitor the behavior of autonomous driving AI after deployment. It is triggered by critical events (e.g., emergency braking, airbag deployment) and captures short image sequences for later analysis.

Its goal is to detect, understand, and correct failures that were not anticipated during design and testing. To reduce data transmission costs, captured images are compressed with neural network-based methods, which in turn requires verifying that the compression does not degrade perception quality. This ensures that safety-critical events remain analyzable and actionable.

Field Monitoring Schema


📚 State of the Art in AI Guarantees

Two complementary families of methods are at the core of AI performance guarantees:

  • Formal Verification: Symbolic interval propagation, MILP-based analysis, convex relaxations, and other methods provide mathematical guarantees for specific input regions or network properties (see the sketch after this list).
  • Statistical Certification: Techniques like conformal prediction and PAC-style bounds use empirical evaluation on large datasets to estimate confidence intervals and generalization guarantees.
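As a minimal illustration of the formal family, the sketch below propagates an input box through a tiny two-layer network with interval bound propagation. The weights, input, and perturbation radius are hypothetical placeholders; the point is only to show how a guaranteed output range is obtained for a specific input region.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer W x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # worst-case spread of the box
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so the box maps elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical 2-layer network and input region (L-infinity ball of radius eps).
W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])
W2, b2 = np.array([[0.7, 1.2]]), np.array([0.0])
x, eps = np.array([0.5, -0.3]), 0.05

lo, hi = x - eps, x + eps
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(f"guaranteed output range: [{lo[0]:.3f}, {hi[0]:.3f}]")
```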

While both approaches offer valuable insights, they suffer from key limitations: lack of scalability, heavy computational requirements, strong statistical assumptions, and in some cases, the need to modify the original models. These constraints pose significant challenges for real-world deployment in complex, safety-critical systems.


📊 Strategy for Statistical Guarantees

Our approach builds on a large-scale empirical evaluation framework, relying on:

  • Multiple datasets: Real (e.g., Waymo, BDD100K, ...), simulated, and generated images.
  • Decomposition of expected performance via the law of total expectation, allowing us to analyze specific subpopulations of interest and manually assign their weights based on prior knowledge or expert-defined frequency estimates.
  • Bootstrap resampling for computing confidence intervals, ensuring robust and statistically meaningful estimates.

This methodology allows for performance estimation with quantifiable uncertainty, paving the way for field-aware AI monitoring systems.
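A minimal sketch of this pipeline, under assumed inputs: the subpopulations ("night", "rain", "nominal"), their expert-defined weights, and the synthetic scores are all hypothetical. The overall expectation is the weighted sum of per-subpopulation means (law of total expectation), and a percentile bootstrap gives a confidence interval for each mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(scores, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean score."""
    scores = np.asarray(scores)
    means = np.array([rng.choice(scores, scores.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Hypothetical per-subpopulation metric samples (e.g., a perception score).
subpop_scores = {
    "night":   rng.normal(0.78, 0.05, 500),
    "rain":    rng.normal(0.82, 0.04, 800),
    "nominal": rng.normal(0.91, 0.02, 5000),
}
# Expert-defined frequency estimates w_k (must sum to 1).
weights = {"night": 0.15, "rain": 0.10, "nominal": 0.75}

# Law of total expectation: E[perf] = sum_k w_k * E[perf | subpopulation k].
overall = sum(w * subpop_scores[k].mean() for k, w in weights.items())
print(f"weighted expected performance: {overall:.3f}")
for k, s in subpop_scores.items():
    lo, hi = bootstrap_ci(s)
    print(f"{k:8s} 95% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```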

Strategy for Statistical Guarantees


🧨 Generating Critical Examples with Diffusion Models

To proactively test the AI system, we generate challenging inputs with diffusion models (e.g., Stable Diffusion, latent diffusion models), following the process below:

Critical image generation.

Generation is guided in the direction of maximal prediction error, using gradients of the loss with respect to the input. The goal is to:

  • Explore failure modes,
  • Expose blind spots in the training distribution,
  • Stress-test model robustness under compression.

This synthesis procedure helps anticipate weaknesses before deployment.
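A heavily simplified sketch of the guidance idea, with toy PyTorch stand-ins for the diffusion denoiser and the model under test (both hypothetical, as is the schematic update rule): at each reverse step, the gradient of the prediction error with respect to the current sample is added to the update, steering generation toward failure-inducing images. The actual pipeline relies on pretrained latent diffusion models and the real compression/perception networks.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a pretrained diffusion denoiser (predicts noise)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x, t):        # t is ignored in this toy stand-in
        return self.net(x)

class ToyModel(nn.Module):
    """Stand-in for the compression/perception model under test."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, x):
        return self.net(x)

def guided_step(x_t, t, denoiser, model, target, scale=1.0):
    """One schematic reverse step nudged toward *higher* prediction error:
    the gradient of the error w.r.t. the sample is added, so sampling
    climbs the error surface while still following the denoiser."""
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                                # predicted noise
    err = nn.functional.mse_loss(model(x_t), target)      # prediction error
    grad = torch.autograd.grad(err, x_t)[0]               # d(error)/d(input)
    return (x_t - eps + scale * grad).detach()

# Toy usage: start from noise and take a few guided steps.
denoiser, model = ToyDenoiser(), ToyModel()
x = torch.randn(1, 3, 64, 64)
target = model(torch.zeros(1, 3, 64, 64)).detach()        # reference output
for t in reversed(range(5)):
    x = guided_step(x, t, denoiser, model, target)
```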

The examples below were synthesized with this procedure: diffusion-generated images, guided by the gradient of the model's error, that are likely to cause mispredictions and expose potential blind spots in the system:

Diffusion-generated critical image example (increasing MSE)

Diffusion-generated critical image example (increasing bpp)


🎓 Thesis Defense & Committee

The thesis was defended in March 2025.

Jury Members:

  • President: BEGHDADI Azeddine, USPN L2TI
  • Reviewer: ROUMY Aline, INRIA
  • Reviewer: ZAHARIA Titus, Télécom SudParis
  • Examiner: FOGELMAN-SOULIÉ Françoise, Hub France IA
  • Examiner: CHAARI Lotfi, Toulouse INP
  • Advisor: VIENNET Emmanuel, USPN L2TI
  • Advisor: GUALANDRIS David, Stellantis

🤝 Acknowledgements

This work was supported by Stellantis and L2TI Lab, Université Sorbonne Paris Nord. Many thanks to all collaborators and reviewers for their insights and guidance.


📬 Contact

Feel free to reach out via jeremyjaspar2@gmail.com for questions or collaboration opportunities.


"Trust in AI is not only about how well it performs, but how well we understand when it might fail."
