Quentin Petit
Currently in my 3rd year of doctoral studies at the University of Paris-Saclay, I'm looking for opportunities in industry to continue working on the computational distribution of very large Deep Learning models.
I worked on distributed and parallel multi-level computing for very large Deep Learning models, focusing on the generalization of data embedding in the pre-processing of large models. The goal is to find the best representation of the data, retaining as much information as possible while reducing its size to save computing power during processing. The methods are implemented in MindSpore, an AI framework.
I experimented with the existing linear algebra routines provided by TensorFlow for sparse matrix computation, and analyzed how new sparse matrix compression formats can be used with TensorFlow to minimize communication and optimize computation time.
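To illustrate the kind of compression format involved, here is a minimal, self-contained sketch of the classic CSR (Compressed Sparse Row) layout and a matrix-vector product over it. This is a generic illustration in plain Python, not the thesis code, and the function names (`to_csr`, `csr_matvec`) are hypothetical:

```python
def to_csr(dense):
    """Compress a dense 2-D list into CSR arrays (values, col_indices, row_ptr).

    Only non-zero entries are stored, which reduces both memory footprint
    and the volume of data exchanged between compute nodes.
    """
    values, col_indices, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_indices.append(j)
        # row_ptr[i] .. row_ptr[i+1] delimits the non-zeros of row i
        row_ptr.append(len(values))
    return values, col_indices, row_ptr


def csr_matvec(values, col_indices, row_ptr, x):
    """Multiply a CSR-encoded matrix by a dense vector x."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_indices[k]]
        y.append(s)
    return y


# Example: a 3x3 matrix with 4 non-zeros out of 9 entries
A = [[1, 0, 2],
     [0, 0, 3],
     [4, 0, 0]]
vals, cols, ptr = to_csr(A)
print(csr_matvec(vals, cols, ptr, [1, 1, 1]))  # → [3, 3, 4]
```

The same idea (storing only non-zeros plus index metadata) underlies TensorFlow's `tf.sparse.SparseTensor`, which uses a COO-style encoding rather than CSR.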
To support a major relocation, and with the aim of improving employee well-being in the workplace, I developed a new web application in Angular with a CakePHP back-end. The application lets all employees listen to music, and their usage data is then used to tailor the music played in the common rooms.
Led the graphical user interface implementation for the SMG2S project (mathematical research project). Website: https://smg2s.github.io/
High-performance computing algorithm development
Parallel and collective communication libraries: MPI, OpenMP, HCCL
Build documents and slides with LaTeX
AI Framework: Keras, TensorFlow, MindSpore
C2: Mother tongue
C1: Fluent
A1: Elementary
A1: Elementary
While studying mathematics and computer science, I became passionate about HPC and computations based on sparse linear algebra in a massively distributed environment.
During my thesis, I applied these principles to accelerate computations and contributed to the generalization of various Deep Learning models.