// Work Experience

10 Years. Every Paradigm Shift.

From distributed AI research at LIP6/CNRS to enterprise platform engineering at EDF Group — each role systematically closing the gap between what AI research makes possible and what real teams can deliver, operate, and scale.

  • Oct 2021 – Present

    EDF (via NeoStair EURL)

    Senior Solution Architect — AI & GenAI Platform

    A different problem at a different scale. The 2019–2021 platform created the infrastructure substrate. By 2021, the question shifted: how do you govern an AI capability that has become shared infrastructure for 10+ data science teams across an industrial group — spanning energy retail, grid operations, commercial analytics, and customer experience — while continuing to push the frontier toward Generative AI and autonomous agent systems?

    This is not a continuation of the previous role but a different class of problem: platform governance at enterprise scale, technology horizon management, and the translation of fast-moving AI research into production-viable architecture, all while keeping 100+ daily jobs running at 99.9% uptime.

    Platform governance and engineering leadership. Defined and maintained the GenAI/ML platform roadmap for EDF Group. Led the Architecture Review Board: produced reusable reference blueprints — RAG pipeline patterns, agentic workflow contracts, model serving standards, feature store schemas — adopted across business units without mandating them. Quarterly technical strategy presented to C-level, translated into investment priorities and team roadmaps.

    The hardest governance problem was not technical standardization. It was managing the tension between platform stability (10+ teams on shared infrastructure) and technology velocity (LLM tooling evolving faster than any previous stack). Decision: stable infrastructure contracts at the platform layer, swappable tooling at the application layer. Teams adopted LangChain or LlamaIndex without the platform changing. The serving infrastructure, observability contracts, and deployment pipelines stayed consistent across all of them.
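
The split can be illustrated in a few lines of Python: one stable platform-layer contract, any number of team-chosen backends behind it. Class and method names below are invented for the sketch, not platform code.

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Stable platform-layer contract: every team-owned backend satisfies it."""
    def generate(self, prompt: str) -> str: ...

class LangChainStyleBackend:
    """Stand-in for a team using a chain-based framework."""
    def generate(self, prompt: str) -> str:
        return f"[chain] {prompt.upper()}"

class LlamaIndexStyleBackend:
    """Stand-in for a team using an index/retrieval-first framework."""
    def generate(self, prompt: str) -> str:
        return f"[index] {prompt[::-1]}"

def serve(backend: LLMBackend, prompt: str) -> str:
    # Platform-layer entry point: observability, deployment pipelines and SLAs
    # hang off this one seam, regardless of the tooling behind it.
    return backend.generate(prompt)
```

Because the contract is structural (a `Protocol`), teams never import platform code into their applications; they only have to match the interface.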

    GenAI and Agentic AI in production. Moved LLM integration beyond proof-of-concept into operational systems: RAG pipelines with evaluated chunking strategies and retrieval quality metrics; pgvector for semantic search at document corpus scale; LangGraph state machines for multi-step agentic workflows with human-in-the-loop checkpoints.
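
The human-in-the-loop checkpoint pattern, reduced to a stdlib sketch (LangGraph implements this with persisted, checkpointed state machines; the code below only illustrates the shape and is not the LangGraph API):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    step: str = "draft"
    draft: str = ""
    approved: bool = False
    log: list = field(default_factory=list)

def run_until_checkpoint(state: WorkflowState, query: str) -> WorkflowState:
    """Agent produces a draft, then suspends and waits for human review."""
    state.draft = f"answer({query})"   # placeholder for the LLM call
    state.step = "awaiting_human"
    state.log.append("draft produced")
    return state  # execution pauses here; state is persisted

def resume(state: WorkflowState, human_ok: bool) -> WorkflowState:
    """Resume from the checkpoint once a human has reviewed the draft."""
    state.approved = human_ok
    state.step = "done" if human_ok else "revise"
    state.log.append(f"human decision: {human_ok}")
    return state
```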

    Flagship delivery: AutoCons-Radar — an LLM-powered eligibility engine processing French public procurement data (BOAMP, TED/JOUE, DECP) to identify photovoltaic self-consumption projects 12–24 months before Enedis registration, enabling EDF commercial teams to anticipate market opportunities ahead of competitors.

    Led the Gemini Code Assist rollout across engineering teams: adoption strategy, guardrail configuration, cost optimization, and the organizational change management that determined whether the tool was actually used or became shelfware.

    LLMOps as a first-class discipline. Established LLMOps practices for production LLM systems: prompt versioning, evaluation frameworks (RAGAS + domain-specific evals), per-team cost monitoring, hallucination guardrails, trace logging for audit and debugging. The principle from 2019 carried forward: if you can’t observe it, you can’t operate it. Applied to LLM outputs, evaluation became an operational concern — not a pre-deployment checkpoint.
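
One of those guardrails, sketched with a deliberately crude heuristic: flag answer sentences with no lexical support in the retrieved context. Production evaluation (RAGAS, LLM-as-judge) is far more sophisticated; this only shows where the check sits in the pipeline.

```python
import re

def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def ungrounded_sentences(answer: str, context: str, min_overlap: float = 0.5):
    """Return answer sentences whose token overlap with the retrieved
    context falls below the threshold (a crude groundedness proxy)."""
    ctx = _tokens(context)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        toks = _tokens(sentence)
        if not toks:
            continue
        overlap = len(toks & ctx) / len(toks)
        if overlap < min_overlap:
            flagged.append(sentence)  # candidate hallucination: route to review
    return flagged
```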

    MLOps at sustained scale.

    Platform uptime SLA: 99.9% across 100+ daily training jobs
    Infrastructure cost: −30% via FinOps on a multi-M€ AWS bill
    Incident resolution: <90 min median (down from 4 h at platform launch)
    Teams served: 10+ data science & product teams

    Stack — Python · LangChain · LangGraph · LlamaIndex · pgvector · Mistral · Gemini · Ollama · MLflow · Evidently · FastAPI · Kafka · Airflow · Kubernetes · OpenShift · AWS · Terraform · Prometheus · Grafana · ELK

  • Oct 2019 – Sep 2021

    EDF (via NeoStair EURL)

    Solution Architect — Cloud Native AI/ML Platform

    The transformation mandate. By 2019, EDF’s data division had analytical talent but no shared delivery infrastructure. The situation was textbook enterprise shadow IT: business-critical logic in Excel files, fragmented SaaS tools without integration, manual workflows consuming engineering capacity, no standard path from data science prototype to production system. A regulatory change created urgency — the organization needed to deliver compliant business applications at a pace the existing approach could not support.

    The decision: build a cloud-native internal development platform. Not another pipeline. Not another dashboard. A platform that would make all subsequent application delivery faster, more reliable, and auditable. I designed it, built it, and led its operationalization.

    What was replaced. Replaced a fragmented shadow IT ecosystem — Excel-based critical processes, isolated SaaS subscriptions, undocumented manual workflows — with a centralized, cloud-native foundation on OpenShift. The design principle: application teams should be able to ship a production-grade business application without rebuilding infrastructure from scratch each time.

    Every component was available as a composable, pre-configured building block: Kafka for event streaming and decoupled data flows, Airflow for auditable workflow orchestration, DBT for versioned data transformation with lineage, Redis for low-latency caching at the serving layer, FastAPI for standardized API contracts, Istio service mesh for hybrid on-prem/cloud inter-service communication. The portability constraint was non-negotiable — identical behavior on OpenShift (regulated, on-premise data) and AWS (cloud workloads).

    Scale and impact.

    Delivery lead time: months of custom infra per app → days of configuration (÷9)
    Production applications: ad-hoc → 20+ business apps
    Developers served: 20+ across business units
    Automated workflows: manual → 100+ jobs/day

    The lead time reduction is not a performance metric. It is an organizational transformation metric: it changed what EDF’s data teams could commit to business stakeholders, and what those stakeholders could reasonably expect from an AI initiative.

    What almost failed — and the lesson. The first version of the Architecture Review Board was too prescriptive. Teams perceived it as a blocker, not a service. Adoption stalled. The platform existed and wasn’t used.

    Fix: replaced mandatory standards with reference blueprints, replaced quarterly review gates with office hours and async feedback, made all tooling opt-in with documented upgrade paths. Governance as a service, not governance as a gate. Adoption recovered within two months. This lesson — governance without perceived value is indistinguishable from friction — carried through every subsequent platform generation.

    The MLOps layer. Delivered EDF’s first systematic MLOps stack: feature store, model registry (MLflow), automated retraining pipelines, CI/CD (GitLab / GitHub Actions), containerized model serving. Before: model deployment was a manual, undocumented process. After: trained model to production API in 5 days, with monitoring, rollback, and audit logging included.
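
The registry-and-rollback mechanics, reduced to a toy (MLflow provides this in production; the rollback rule here, "previous version", is a simplification of stage transitions):

```python
class ModelRegistry:
    """Toy append-only model registry with promote and rollback."""

    def __init__(self):
        self._versions = []   # append-only history of registered models
        self._current = None  # 1-based version currently serving, or None

    def register(self, model) -> int:
        self._versions.append(model)
        return len(self._versions)  # 1-based version number

    def promote(self, version: int) -> None:
        """Point production traffic at a registered version."""
        assert 1 <= version <= len(self._versions)
        self._current = version

    def rollback(self) -> None:
        """Revert to the immediately preceding version after a bad deploy."""
        if self._current and self._current > 1:
            self._current -= 1

    def serving(self):
        return self._versions[self._current - 1] if self._current else None
```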

    Stack — Python · FastAPI · Kubernetes · OpenShift · Docker · AWS · Terraform · Kafka · Airflow · DBT · Redis · MLflow · Istio · GitLab CI · Spark

  • Oct 2017 – Sep 2019

    EDF

    Big Data Engineer — Data Platform & Decision Systems

    Platform engineering, not pipeline development. The strategic choice made early was to build reusable platform components rather than one-off pipelines. This is the difference between a data team that grows linearly with headcount and one that scales by design. The goal: establish shared infrastructure that every team could adopt without rebuilding the same foundations.

    Delivered centralized logging, supervision, and archival modules adopted across 10+ production pipelines. Designed the data quality framework that enforced schema contracts at ingestion — making violations visible at the source rather than at consumption, weeks later. The cultural argument was as important as the technical one: adoption required demonstrating that the platform reduced each team’s operational burden, not added governance overhead.
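
A minimal sketch of what schema contracts at ingestion mean in practice: each record is checked against a declared contract at the source, and violations are rejected with a reason instead of surfacing weeks later at consumption. Field names are hypothetical.

```python
# Hypothetical contract for an incoming meter-reading record.
CONTRACT = {"customer_id": str, "kwh": float, "read_at": str}

def validate(record: dict):
    """Return (ok, errors) for one record checked against the contract."""
    errors = []
    for fname, ftype in CONTRACT.items():
        if fname not in record:
            errors.append(f"missing field: {fname}")
        elif not isinstance(record[fname], ftype):
            errors.append(f"bad type for {fname}: expected {ftype.__name__}")
    extra = set(record) - set(CONTRACT)
    if extra:
        errors.append(f"unexpected fields: {sorted(extra)}")
    return (not errors, errors)
```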

    Near real-time Customer 360. The flagship delivery: a near real-time Customer 360 data platform consolidating customer data from multiple heterogeneous source systems into a unified, queryable layer serving EDF’s retail decision-support systems.

    Architecture: Apache Spark for distributed processing and transformation, HBase and Apache Phoenix for low-latency key-value serving, Elasticsearch for full-text search and analytical queries. Data updated in near real-time. The binding constraint: Oracle CRM, billing systems, and meter data management platforms could not be modified or replaced. The platform had to consume their outputs as-is, resolve inconsistencies downstream, and present a coherent customer model to all consumers. Schema resolution and data quality enforcement lived in the transformation layer — not at the source.
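
The downstream-resolution principle, as a sketch: since the source systems could not change, conflicts are resolved in the transformation layer with an explicit precedence rule. Sources and fields below are invented stand-ins, not the real CRM schema.

```python
# Higher-precedence sources win on conflicting fields; lower ones fill gaps.
PRECEDENCE = ["crm", "billing", "meter"]

def unify(records: dict) -> dict:
    """records maps source name -> partial customer dict. Returns one
    coherent customer view; each field is taken from the highest-precedence
    source that provides a non-null value."""
    unified = {}
    for source in PRECEDENCE:
        for fname, value in records.get(source, {}).items():
            if value is not None and fname not in unified:
                unified[fname] = value
    return unified
```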

    Observability as a founding principle. Built a centralized pipeline monitoring platform on Elasticsearch and Kibana — EDF’s first systematic operational visibility layer for data infrastructure. Every pipeline exposed health metrics, SLA tracking, and anomaly detection from day one. This principle carried through every subsequent platform generation built on top of this foundation.

    Stack — Apache Spark · Hadoop · HDFS · Hive · HBase · Apache Phoenix · Elasticsearch · Kibana · Python · SQL

  • Feb 2017 – Aug 2017

    Sanofi

    Machine Learning Engineer — Supply Chain Forecasting

    The constraint that shaped everything. Pharmaceutical forecasting is not a KPI problem — it is a patient safety problem. A stock-out in a pharmaceutical distribution network has a healthcare consequence, not just a financial one. This changes everything: the tolerance for false precision, the documentation requirements, and the adoption process for any model that touches an operational system.

    The problem. Sanofi’s distribution centers were running on forecasting models that hadn’t been revisited in years. Consumption volatility — seasonal epidemics, promotional dynamics, supply disruptions — had outpaced the models’ assumptions. The result: chronic inventory imbalance across the portfolio. Overstocked on stable lines. Understocked on volatile ones. The mandate: rebuild the forecasting layer and demonstrate measurable stock reduction without degrading service levels.

    What was built. Designed and implemented ML demand forecasting models in Python (scikit-learn) with feature engineering informed by domain expertise: seasonal decomposition, promotional calendar flags, lead time distributions, substitution product clustering. ETL pipeline consuming SQL Server and SAP Business Objects, producing outputs directly compatible with the existing planning system — a non-negotiable integration constraint that shaped the full output schema.
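
Two of the engineered features, sketched with stdlib Python (the real pipeline was scikit-learn over SQL Server extracts; the data and names here are invented for illustration):

```python
from datetime import date

def promo_flag(day: date, promo_windows) -> int:
    """1 if the date falls inside any promotional window, else 0."""
    return int(any(start <= day <= end for start, end in promo_windows))

def seasonal_index(history):
    """history: list of (month, demand) observations.
    Returns month -> mean demand for that month / overall mean demand."""
    overall = sum(d for _, d in history) / len(history)
    by_month = {}
    for month, demand in history:
        by_month.setdefault(month, []).append(demand)
    return {m: (sum(v) / len(v)) / overall for m, v in by_month.items()}
```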

    The deployment model was incremental by design. In a GxP-adjacent environment, a model cannot replace a planning process overnight. The new models ran for three months in parallel with the legacy forecast, product family by product family, earning the operations team’s trust before full handover.

    Results. 8% stock reduction across the distribution network without increasing service failure rates — measured over the parallel run period. The real lesson: in regulated environments, deployment is as much organizational change management as it is engineering. The model’s technical quality was necessary. The adoption process was the actual deliverable.

    Stack — Python · scikit-learn · SQL Server · SAP Business Objects · QlikView

  • Nov 2015 – Jul 2016

    LIP6 — CNRS Research Laboratory

    Research Engineer — Distributed Multi-Agent Systems

    The problem classical optimization couldn’t solve. Supply chain coordination assumes a central planner with full information. Real multi-echelon supply chains violate this assumption structurally: producers and suppliers are autonomous, self-interested actors with private cost functions they will not disclose. Centralized optimization either fails or requires information transparency that no rational economic actor would accept.

    What was built. Designed and implemented a distributed multi-agent negotiation system in which producer and supplier agents reach near-optimal collective outcomes through structured bargaining — without a central coordinator, without full information disclosure. Each agent operates on private state only, communicates through typed message schemas, and updates its strategy based on observed outcomes rather than disclosed cost functions.

    The system implements Nash bargaining mechanisms for bilateral negotiation, revenue-sharing contracts for coalition stability under incomplete information, and belief-revision strategies that allow agents to refine their world model across negotiation rounds. Architecture modeled with AUML and Gaia methodology, implemented in JADE (Java Agent Development Framework). Validated against standard Bullwhip Effect benchmark scenarios — emergent equilibria reached in topologies where centralized planning would require full information transparency from all parties.
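
The bilateral Nash bargaining solution at the core of the mechanism, as a toy: two agents split a surplus by maximizing the product of their utility gains over their disagreement points. Python and grid search are used here for illustration; the original implementation was Java/JADE.

```python
def nash_split(surplus: float, d1: float = 0.0, d2: float = 0.0,
               steps: int = 10_000) -> float:
    """Return agent 1's share of `surplus` that maximizes the Nash
    product (u1 - d1) * (u2 - d2), where u1 = x and u2 = surplus - x,
    and d1, d2 are the agents' disagreement payoffs."""
    best_x, best_val = 0.0, float("-inf")
    for i in range(steps + 1):
        x = surplus * i / steps          # agent 1 receives x, agent 2 the rest
        val = (x - d1) * ((surplus - x) - d2)
        if val > best_val:
            best_x, best_val = x, val
    return best_x
```

With symmetric disagreement points the solution is the even split; raising one agent's outside option shifts the split in its favor, which is exactly the lever the revenue-sharing contracts operate on.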

    Why this is a 2026 architecture problem. The coordination primitives designed here — private agent state, goal-directed behavior, typed communication protocols, convergence across rounds — are what LangGraph, AutoGen, and CrewAI implement today under the names tool use, memory, and multi-agent orchestration.

    The questions formalized in 2016 remain the right evaluation framework for agentic systems: What is the communication protocol? How is partial state handled across agent boundaries? What are the failure modes when an agent receives contradictory signals? How does the system recover from a mid-execution fault? These questions didn’t change. Only the compute did.

    Read the full thesis →

    Stack — Java · JADE · Nash Bargaining · Coalition Formation · AUML · Gaia Methodology