Work
  • Jan 2020 - Current
    EDF - Solutions Architect
    Analytics Team

    In a complex and dynamic energy market, business units require customized and agile Software as a Service (SaaS) tools with a very short time to market. Current solutions lack standardization and scalability, are not designed for innovations such as artificial intelligence, and are further complicated by shadow IT.

    Mission:

    To address these challenges, our cross-functional team serves as a bridge between various stakeholders, including EDF IT services providers, business units, the open-source community, and new technologies. Our strategy is to adopt a cloud-native approach, capitalizing on EDF's partnership with Red Hat to leverage the OpenShift platform 🔗.

    Accomplishments:

    • DevSecOps: implemented Infrastructure as Code (IaC) with Terraform and CI/CD pipelines with Helm charts, GitLab, SonarQube, and Checkmarx.
    • Built boilerplates following the 12-factor methodology 🔗 (web apps, APIs, and jobs).
    • Implemented an observability stack with ELK, Prometheus, and Grafana.
    • Designed and built a data platform using Airflow for orchestration, Redis for caching, and Kafka for streaming (see the sketch after this list).
    • Made cutting-edge technologies such as artificial intelligence accessible by building MLOps pipelines, with plans to incorporate generative AI to assist business units and development teams.
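
    A minimal sketch of how such an Airflow-orchestrated pipeline can be wired together; the DAG id, task names, schedule, and the Kafka/Redis steps hinted at in the comments are illustrative only, not the actual platform code:

    ```python
    # Illustrative Airflow DAG: extract -> transform -> publish.
    # All names and the schedule are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Placeholder: e.g. pull raw events from a Kafka topic into staging storage.
        print("extracting raw events")


    def transform(**context):
        # Placeholder: e.g. clean and aggregate the extracted data.
        print("transforming events")


    def publish(**context):
        # Placeholder: e.g. push aggregates to Redis so APIs can serve them quickly.
        print("publishing aggregates to cache")


    with DAG(
        dag_id="analytics_pipeline_example",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        publish_task = PythonOperator(task_id="publish", python_callable=publish)

        extract_task >> transform_task >> publish_task
    ```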

    Result:

    Our platform now hosts over 50 applications (web apps, jobs, APIs), with dev squads delivering agile, secure solutions to business units and cutting time to market by a factor of 9.

    My day-to-day tasks:

    • Translate business requirements into secure, scalable, and reliable solution designs.
    • Regularly develop proofs of concept and pilots, implement projects, and communicate new features and benefits to stakeholders.
    • Document and share best practice knowledge for new solutions.
    • Lead architectural design sessions and provide technical guidance throughout the project lifecycle.
  • Oct 2017 - Jan 2020
    EDF - Software Engineer
    Big Data Team

    Part of the Big Data team, responsible for the Hadoop ecosystem.

    Mission: my role involved designing and implementing big data pipelines to support real-time analytics and business intelligence initiatives.

    Accomplishments:

    • 360° customer view: created a coherent and up-to-date view of the customer experience. Extracted and transformed data using Apache Spark, then loaded it into Apache Phoenix, a high-performance relational database layer over HBase, for low-latency queries (see the sketch after this list).

    • Monitoring application: centralized logs from Hadoop processes into Elasticsearch (built a module on top of log4j to integrate into Oozie workflows) and built a KPI dashboard in Kibana, providing stakeholders with actionable insights for operational decision-making.
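
    A minimal PySpark sketch of the kind of load behind the 360° customer view, assuming the phoenix-spark connector is on the classpath; the HDFS path, ZooKeeper URL, table, and column names are hypothetical:

    ```python
    # Illustrative Spark ETL: read raw events from HDFS, aggregate per customer,
    # and upsert into an Apache Phoenix table backed by HBase.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("customer-360-example").getOrCreate()

    # Extract: raw interaction events stored on HDFS (hypothetical path).
    events = spark.read.parquet("hdfs:///data/raw/customer_events")

    # Transform: one up-to-date row per customer.
    customer_view = events.groupBy("CUSTOMER_ID").agg(
        F.max("EVENT_TS").alias("LAST_CONTACT_TS"),
        F.count("*").alias("INTERACTION_COUNT"),
    )

    # Load: write into Phoenix for low-latency queries (the connector performs
    # upserts; "overwrite" is the mode it expects).
    (
        customer_view.write.format("org.apache.phoenix.spark")
        .mode("overwrite")
        .option("table", "CUSTOMER_360")
        .option("zkUrl", "zookeeper-host:2181")
        .save()
    )
    ```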

    My day-to-day tasks:

    • Collaborate with cross-functional teams to define data requirements and architecture.
    • Design and develop ELT data pipelines from HDFS to Hive, HBase, and Elasticsearch, using Apache Spark for transformations (see the sketch after this list).
    • Monitor data pipelines and manage incidents to ensure data integrity and availability.
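
    As an illustration of one branch of such a pipeline, a minimal PySpark sketch from HDFS to Hive; paths, database, and column names are made up for the example:

    ```python
    # Illustrative ELT: land raw CSV data from HDFS, apply a light cleaning step
    # with Spark, and expose the result as a managed Hive table.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder.appName("hdfs-to-hive-example")
        .enableHiveSupport()  # assumes the cluster is configured for Hive
        .getOrCreate()
    )

    raw = spark.read.option("header", True).csv("hdfs:///data/raw/meter_readings")

    cleaned = (
        raw.withColumn("reading_kwh", F.col("reading_kwh").cast("double"))
        .filter(F.col("reading_kwh").isNotNull())
    )

    # Hypothetical "analytics" database; analysts can then query it with HiveQL.
    cleaned.write.mode("overwrite").saveAsTable("analytics.meter_readings_clean")
    ```
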
  • Feb 2017 - Sep 2017
    SANOFI - Software Engineer
    Supply Chain Team

    Improved the performance of Sanofi's primary distribution center (working capital requirement, WCR), with a focus on inventory levels and delivery lead times.

    • Built ETL data pipelines.
    • Developed the reporting system (KPIs with QlikView and SAP BusinessObjects).
    • Demand forecasting: built a machine learning model for multivariate time series (gradient boosting, Python); see the sketch below.
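
    A minimal scikit-learn sketch of such a forecaster, assuming a pandas DataFrame with a "demand" column plus numeric exogenous columns (e.g. price, promotions); feature engineering and hyperparameters are illustrative, not the production model:

    ```python
    # Illustrative gradient-boosting forecaster over lagged demand features.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor


    def make_features(history: pd.DataFrame, lags=(1, 7, 14)) -> pd.DataFrame:
        """Turn the multivariate series into a supervised-learning table via lagged demand."""
        features = history.copy()
        for lag in lags:
            features[f"demand_lag_{lag}"] = features["demand"].shift(lag)
        return features.dropna()


    def fit_forecaster(history: pd.DataFrame) -> GradientBoostingRegressor:
        data = make_features(history)
        X = data.drop(columns=["demand"])  # lags + exogenous variables
        y = data["demand"]
        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
        model.fit(X, y)
        return model
    ```
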
  • Nov 2015 - Jul 2016
    LIP6 - Research Engineer
    Multi-Agent Systems Team

    Built an intelligent procurement model: used distributed artificial intelligence (multi-agent systems) to address limitations of traditional supply chain coordination approaches (VMI, CPFR), such as the bullwhip effect.

    • Multi-agent system modeling (AUML, Gaia).
    • Algorithm: bargaining combined with buyback and revenue-sharing contracts (JADE: Java Agent DEvelopment Framework); see the sketch below.
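
    The original agents were implemented in Java with JADE; the following Python toy only illustrates an alternating-offers bargaining loop (buyback and revenue-sharing terms omitted), with made-up reservation prices:

    ```python
    # Toy alternating-offers bargaining between a buyer and a supplier agent.
    # Purely illustrative: not the original JADE implementation or its contracts.
    from dataclasses import dataclass


    @dataclass
    class Agent:
        name: str
        reservation_price: float  # worst acceptable unit price for this agent
        concession: float         # how much the agent concedes per round


    def negotiate(buyer: Agent, supplier: Agent, rounds: int = 20):
        buyer_bid = buyer.reservation_price * 0.5        # buyer starts low
        supplier_ask = supplier.reservation_price * 1.5  # supplier starts high
        for _ in range(rounds):
            if buyer_bid >= supplier_ask:
                # Offers have crossed: settle by splitting the difference.
                return (buyer_bid + supplier_ask) / 2
            buyer_bid += buyer.concession
            supplier_ask -= supplier.concession
        return None  # no agreement within the deadline


    price = negotiate(Agent("buyer", 10.0, 0.5), Agent("supplier", 8.0, 0.5))
    print(price)  # 8.5 with these illustrative parameters
    ```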