
### Dealing with rotation matrices and translation vectors in image-based applications: A tutorial

#### Abstract

Rotation matrices are a convenient and intuitive way to describe algebraically the relative orientation of multiple cameras, or of the same camera shooting from different points of view. However, the definition of a rotation matrix is prone to intrinsic ambiguity, which often leads to a mismatch with the physical rotation one wants to describe, even when the definition is mathematically correct. This is a common source of errors whenever a rotation matrix must be computed from camera orientation data or, vice versa, such data must be recovered from a given rotation matrix. This tutorial describes and resolves the main factors that generate ambiguity in the use of rotation matrices, so that they can be handled properly both in theory and in practice. Through a detailed analysis of these factors, ranging from basic mathematical aspects to the notation used to refer to them, it shows how to avoid errors in the algebraic description of the relative orientation of different cameras by means of rotation matrices. This work is followed by a companion contribution, in which the interaction between rotation matrices and translation vectors (used to describe the shifts between pairs of cameras) is also analyzed, and a recommendation is given on how to define a common reference system coherent with a camera, a crucial aspect of modeling the camera acquisition geometry. Together, the two contributions cover the entire description of the relative acquisition geometry of images taken from different points of view and provide a complete and error-free methodology to recover it, or to extract useful data from it. This topic is particularly important in a wide variety of aerospace applications, which often rely on multiple imaging sensors whose information must be merged, or on imaging devices carried by manned or unmanned vehicles.
Such applications range from flying-object detection, to three-dimensional reconstruction from aerial or satellite images, to automatic drone navigation, to change detection for area monitoring, and to georegistration by ground-to-aerial image matching.
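The ambiguity the abstract refers to can be seen even in two dimensions: the same matrix describes either an *active* rotation (the point moves within a fixed frame) or a *passive* one (the frame rotates and the fixed point is re-expressed in it), and the two interpretations differ by a transpose. The sketch below is a minimal illustration of this, not code from the tutorial itself; the function names are hypothetical.

```python
import math

def rot2d(theta):
    """2x2 rotation matrix for angle theta (radians), as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(R, p):
    """Multiply the 2x2 matrix R by the 2-vector p."""
    return [R[0][0] * p[0] + R[0][1] * p[1],
            R[1][0] * p[0] + R[1][1] * p[1]]

theta = math.pi / 2          # 90 degrees
R = rot2d(theta)
p = [1.0, 0.0]

# Active convention: rotate the point within a fixed frame.
p_active = apply(R, p)       # ~ [0, 1]

# Passive convention: express the same fixed point in the rotated
# frame, which uses the transpose (= inverse) of R instead.
Rt = [[R[0][0], R[1][0]],
      [R[0][1], R[1][1]]]
p_passive = apply(Rt, p)     # ~ [0, -1]
```

The two results point in opposite directions, so silently mixing the conventions (for instance, between orientation data from an IMU and a matrix expected by a vision library) produces exactly the kind of "mathematically correct but physically wrong" matrix the tutorial warns about.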
2019
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
Files in this item:
File: ATutorial.pdf

Authorized users only

Type: Final published version
License: NOT PUBLIC - Private/restricted access
Size: 2.04 MB
Use this identifier to cite or link to this item: `https://hdl.handle.net/11568/991496`