Université Clermont Auvergne

PhD Research Proposal « MMW radar and optical camera fusion architecture for mobile robot perception in poorly structured environments and adverse weather conditions. »


Supervisors, location and funding

  • Laboratory: Université Clermont Auvergne, Institut Pascal (UMR 6602 CNRS/UCA/SIGMA), ISPR research group (Images, Perception Systems and Robotics).

📍 Campus Universitaire des Cézeaux, 4 Avenue Blaise Pascal, 63178 Aubière, France

  • Director and co-supervisors:
    • Omar Ait-Aider (Institut Pascal, Université Clermont Auvergne - MCF - HDR),
    • Mathieu Labussière (Institut Pascal, Université Clermont Auvergne - MCF),
    • Pierre Duthon (Cerema - Ingénieur-Chercheur),
    • Giulio Reina (Robotic Mobility Lab of Politecnico di Bari, Italy - Full Professor)
  • Partnerships:
    • Cerema (Équipe Recherche STI)
    • INRAE (Équipe ROMEA, Unité TSCF)
    • Robotic Mobility Lab of Politecnico di Bari
  • Funding: The position is fully funded by the International Research Center (CIR) "Innovation Transportation and Production Systems" (ITPS) of the I-SITE CAP 20-25, for a period of 36 months, starting preferably by September/October 2024. Monthly income is approximately 1,600€ (after taxes). 

Scientific field, context and objective

The work proposed in this thesis focuses on the fusion of data from a mmW (millimeter-wave) radar sensor and an RGB optical camera for the perception of mobile robots in poorly structured environments and difficult weather conditions (dust, rain, fog and snow).

Efficient and safe autonomous navigation requires an accurate representation of the 3D environment surrounding the vehicle at all times. To this end, the fusion of multi-modal data from LiDAR and cameras is a well-studied method in the state of the art. However, it has limitations under difficult perception conditions [1]. The aim of this thesis is to develop a method providing accurate 3D perception as well as semantic and panoptic segmentation of the environment, even in conditions unfavorable to LiDAR, by exploiting camera/mmW radar fusion.

The work will consist of developing a neural network architecture capable of learning to segment a road scene into semantically reliable and geometrically precise instances from a pair of RGB images and radar scans. The initial aim is to improve object resolution and to determine object positions and kinematics robustly and accurately. The difficulty lies in matching data from very different sensors [2]. Indeed, the primitives extracted from each sensor cannot easily be superimposed, due to the difference in viewpoint (frontal for the camera, bird's-eye view for the radar) and in modality (appearance for the camera, distance for the radar), as the sketch below illustrates.
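
To make this viewpoint mismatch concrete, the following minimal Python sketch projects a single radar return into camera pixel coordinates, assuming an already-calibrated rig. The intrinsic matrix K and the radar-to-camera extrinsics (R, t) below are placeholder values, and the zero-elevation assumption is precisely the information a 2D radar scan does not provide, which is what makes camera/radar association ambiguous.

    import numpy as np

    # Placeholder calibration: K is the camera intrinsic matrix; (R, t)
    # map points from the radar frame to the camera frame (extrinsics).
    K = np.array([[800.0,   0.0, 640.0],
                  [  0.0, 800.0, 360.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)                  # radar-to-camera rotation (placeholder)
    t = np.array([0.0, 0.2, 0.1])  # radar-to-camera translation, in meters

    def radar_to_image(rng, azimuth):
        """Project a radar return (range in meters, azimuth in radians,
        measured in the radar's bird's-eye-view plane) into image pixels."""
        # A 2D radar scan carries no elevation, so we assume the target
        # lies in the radar's scanning plane (y = 0 in the radar frame).
        p_radar = np.array([rng * np.sin(azimuth), 0.0, rng * np.cos(azimuth)])
        p_cam = R @ p_radar + t
        u, v, w = K @ p_cam
        return u / w, v / w  # pixel coordinates (before bounds checking)

    print(radar_to_image(15.0, np.deg2rad(5.0)))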

The idea is to exploit the complementary nature of these two sensors by combining the advantages of the information-rich (semantic and geometric) RGB modality with the radar's ability to perceive more of the scene at greater range, particularly in the presence of occlusions. Classically, state-of-the-art work focuses on using the radar modality to improve detection and/or segmentation in RGB images [3]. Few works address the use of RGB information to aid radar scan segmentation. Processing will also rely on the precise calibration of the camera/radar system, building on work previously carried out in the lab [4,5].
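
As an illustration of this less-explored "RGB helps radar" direction, the following PyTorch sketch lets radar bird's-eye-view features query image features through cross-attention, so that the output segmentation lives in radar space. The layer sizes, class count and encoder choices are placeholders for illustration, not the architecture to be developed in the thesis.

    import torch
    import torch.nn as nn

    class CameraRadarFusion(nn.Module):
        """Minimal two-branch fusion sketch: each modality is encoded
        separately, then radar tokens attend to image tokens so that
        RGB semantics can enrich the radar representation."""
        def __init__(self, dim=64, num_classes=10):
            super().__init__()
            self.rgb_encoder = nn.Conv2d(3, dim, kernel_size=3, padding=1)
            self.radar_encoder = nn.Conv2d(1, dim, kernel_size=3, padding=1)
            self.cross_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                    batch_first=True)
            self.head = nn.Conv2d(dim, num_classes, kernel_size=1)

        def forward(self, rgb, radar_bev):
            f_rgb = self.rgb_encoder(rgb)          # (B, C, H, W) image features
            f_rad = self.radar_encoder(radar_bev)  # (B, C, Hr, Wr) BEV features
            B, C, Hr, Wr = f_rad.shape
            q = f_rad.flatten(2).transpose(1, 2)   # radar tokens as queries
            kv = f_rgb.flatten(2).transpose(1, 2)  # image tokens as keys/values
            fused, _ = self.cross_attn(q, kv, kv)
            fused = fused.transpose(1, 2).reshape(B, C, Hr, Wr)
            return self.head(fused)                # class logits in radar space

    model = CameraRadarFusion()
    logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 32, 32))
    print(logits.shape)  # torch.Size([1, 10, 32, 32])

Attending from radar to image (rather than the reverse) is one simple way to encode the stated design goal: the prediction remains anchored in the radar's geometry while borrowing the camera's semantics.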

In addition, to achieve these objectives, it is essential to build a consistent database that is representative of the variability of environments and conditions in which these algorithms will operate. This task will be supported by the EquipEx ROBOTEX and EquipEx+ TIRREX platforms available at the Institut Pascal laboratory (in particular PAVIN, EZ10 and ZOE), as well as a 360° mmW radar.

Furthermore, we will focus on characterizing and evaluating the performance of the radar-camera fusion architecture in degraded weather conditions, thanks in particular to the support of Cerema's PAVIN Brouillard & Pluie (PAVIN BP) platform. The aim is to characterize and evaluate the detection and semantic transfer capabilities in radar space for the various elements of a scene in a controlled environment, i.e., known and characterized scenes (precise object resolution, position and kinematics) under variable weather conditions. This will enable these data to be used as ground truth to validate the architecture to be developed. Finally, a comparison with the sensors traditionally found on autonomous vehicles (LiDAR and camera) will be set up to highlight the advantages of the radar sensor combined with the camera.
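
A minimal sketch of how such a controlled-weather evaluation could be scored is given below; the per-condition protocol and the choice of mean intersection-over-union (mIoU) as the metric are assumptions for illustration, and the random arrays stand in for real predictions and PAVIN BP ground truth.

    import numpy as np

    def miou(pred, gt, num_classes):
        """Mean intersection-over-union between predicted and ground-truth
        label maps, a standard semantic-segmentation score."""
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            if union > 0:
                ious.append(inter / union)
        return float(np.mean(ious))

    # Hypothetical protocol: the same annotated scene replayed under several
    # controlled conditions, scored per condition and per sensor setup, so
    # that radar-camera fusion can be compared against a LiDAR-camera baseline.
    rng = np.random.default_rng(0)
    for condition in ["clear", "rain", "fog", "snow"]:
        pred = rng.integers(0, 5, (64, 64))  # stand-in predictions
        gt = rng.integers(0, 5, (64, 64))    # stand-in ground truth
        print(condition, round(miou(pred, gt, num_classes=5), 3))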

The use of radar will enhance the capabilities of these existing sensors as part of a varied "sensor suite" for vehicles. Integrating information from several types of sensor will overcome the weaknesses of individual approaches, ensure redundancy and, ultimately, make vehicles safer and increasingly autonomous. 

Information and contact

Start date, duration and work environment. The position is for a period of 36 months, starting preferably in September/October 2024. Working from home may be considered but is not recommended. The typical work week runs Monday to Friday, 9 am-5:30 pm, approximately 35 hours.

Required skills. M.Sc. (or Diplôme d’Ingénieur) in Robotics / Automatic Control / Mechatronics / Electronics / Signal Processing / Computer Vision. Strong background in Robotics and Computer Vision, and curiosity to work with sensors in experimental situations. Good experience with machine (deep) learning would be appreciated. Strong programming skills would be an advantage (e.g., Python, C++). A good level of English is required (level B2+ or C1), along with strong communication skills. A minimal level of French would be helpful.

Contact. If you are interested, please send a CV, a motivation letter, master's grades and ranking, and recommendation letter(s) to Mathieu Labussière by email at mathieu.labussiere@uca.fr (and/or to Omar Ait-Aider at omar.ait-aider@uca.fr, and/or to Pierre Duthon at pierre.duthon@cerema.fr). An audition will be organized upon acceptance of the application.

Deadline. Application: 15/05/2024; Audition(s): 30/06/2024.

Bibliography.

  1. A. S. Mohammed et al., "The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review," Sensors, 2020.
  2. A. Prakash, K. Chitta and A. Geiger, "Multi-Modal Fusion Transformer for End-to-End Autonomous Driving," CVPR, 2021.
  3. A. Srivastav and S. Mandal, "Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges," IEEE Access, vol. 11, 2023.
  4. G. El Natour, O. Ait-Aider, et al., "Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor," Sensors, 2015.
  5. G. El Natour, O. Ait-Aider, et al., "Radar and vision sensors calibration for outdoor 3D reconstruction," ICRA, 2015.


