FN Archimer Export Format
PT J
TI Homography-based loss function for camera pose regression
BT
AF Boittiaux, Clementin
   Marxer, Ricard
   Dune, Claire
   Arnaubec, Aurelien
   Hugel, Vincent
AS 1:1,2,3;2:2;3:3;4:1;5:3;
FF 1:PDG-DFO-SM-PRAO;2:;3:;4:PDG-DFO-SM-PRAO;5:;
C1 Ifremer, 83000 La Seyne-sur-Mer, France
   CNRS, LIS, Université de Toulon, Aix Marseille Université, 83000 Toulon, France
   COSMER, Université de Toulon, 83000 Toulon, France
C2 IFREMER, FRANCE
   CNRS, FRANCE
   UNIV TOULON, FRANCE
SI TOULON
SE PDG-DFO-SM-PRAO
IN WOS Ifremer UPR copubli-france copubli-univ-france
IF 5.2
TC 3
UR https://archimer.ifremer.fr/doc/00766/87810/93679.pdf
LA English
DT Article
DE Localization; deep learning for visual perception
AB Some recent visual-based relocalization algorithms rely on deep-learning methods to perform camera pose regression from image data. This paper focuses on the loss functions that embed the error between two poses for deep-learning-based camera pose regression. Existing loss functions are either difficult-to-tune multi-objective functions or unstable reprojection errors that rely on ground-truth 3D scene points and require two-step training. To address these issues, we introduce a novel loss function based on a multiplane homography integration. This new function requires no prior initialization and depends only on physically interpretable hyperparameters. Furthermore, experiments carried out on well-established relocalization datasets show that it achieves the lowest mean squared reprojection error during training compared with existing loss functions.
PY 2022
PD JUN
SO IEEE Robotics And Automation Letters
SN 2377-3766
PU Institute of Electrical and Electronics Engineers (IEEE)
VL 7
IS 3
UT 000788948000002
BP 6242
EP 6249
DI 10.1109/LRA.2022.3168329
ID 87810
ER
EF
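
Editor's note: the sketch below is only an illustration of the general idea named in the abstract (a pose-regression loss built from plane-induced homographies), not the closed-form multiplane homography integration proposed by the authors. The function name homography_pose_loss, the PyTorch framework, the world-to-camera pose convention, the fronto-parallel plane choice, and the averaging over a small discrete set of plane depths are all assumptions made for this example.

import torch

def homography_pose_loss(R_hat, t_hat, R_gt, t_gt,
                         depths=(1.0, 2.0, 4.0, 8.0)):
    """R_*: (B, 3, 3) world-to-camera rotations, t_*: (B, 3, 1) translations."""
    # Relative transform mapping ground-truth-camera coordinates
    # to predicted-camera coordinates.
    R_rel = R_hat @ R_gt.transpose(1, 2)
    t_rel = t_hat - R_rel @ t_gt

    # Fronto-parallel plane normal n = (0, 0, 1) in the ground-truth camera frame.
    n = torch.tensor([0.0, 0.0, 1.0], device=R_hat.device).view(1, 1, 3)

    eye = torch.eye(3, device=R_hat.device).unsqueeze(0)
    loss = 0.0
    for d in depths:
        # Homography induced by the plane at depth d (normalized coordinates):
        # H(d) = R_rel - t_rel * n^T / d. A perfect pose estimate gives H(d) = I,
        # so the Frobenius distance to the identity penalizes the pose error.
        H = R_rel - (t_rel @ n) / d
        loss = loss + ((H - eye) ** 2).sum(dim=(1, 2))
    return (loss / len(depths)).mean()

Averaging the discrepancy over several plane depths stands in here for the integration over planes described in the abstract; it keeps the loss a single objective with depth-range hyperparameters that have a physical interpretation, which is the property the abstract highlights.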