Among the most important obstacles to the deployment of machine learning technologies are robustness and safety in critical contexts. Indeed, it is often fairly easy to fool a deep learning model into mispredicting, which is unacceptable for systems critical to human safety. Another obstacle is explaining the predictions of deep learning models. Such explanations are important for human decision makers to understand and accept machine learning predictions, in order to fully benefit from the huge potential of AI.
We are seeking a talented candidate with a background in applied mathematics to investigate the following research topics at the intersection of AI and formal methods:
- Explainable AI,
- Robustness of machine learning models,
- Interpretability of machine learning models.
The PhD will be carried out in collaboration with the Research Chair on Deep Learner Explanation and Verification of the Artificial and Natural Intelligence Toulouse Institute (ANITI) and the University of Toulouse. The duration is 36 months, with a net salary of €2096 per month and some teaching duties (64 hours per year on average).
Candidate profile:
- MSc/Engineering degree in applied mathematics; strong knowledge of mathematical modeling is required,
- Experience in formal methods, data analysis, scientific computing, optimization, control theory, inverse problems, or algorithmic differentiation would be appreciated,
- Programming skills in C++ or Python/Matlab are needed,
- Creativity, exceptional critical thinking, and the ability to adapt to new challenges.
Send your application to: firstname.lastname@example.org