Self-Calibrating Camera Network using Angle and Depth Information
FANUCCI, LUCA
2007-01-01
Abstract
This paper describes a novel method to localize the position and orientation of smart cameras in a room using the depth and angle of an object (a person) moving around the area. Each camera in the network processes this depth and angle information. In this scheme the sensors are placed so that the field of view (FOV) of each camera overlaps at least partially with that of another camera. Each camera runs a person-detection algorithm and uses stereo vision to estimate the person's depth. These relative measurements are sent to a central node, which builds the map of the entire sensor network. Every smart camera executes the same algorithm, which makes the system scalable when more cameras are added to the network; scalability is limited only by communication bandwidth requirements.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
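The core idea in the abstract can be sketched as follows: if two cameras observe the same person at the same instants, each camera's depth and angle measurements give the person's positions as polar coordinates in that camera's local frame, and a 2-D rigid registration over these corresponding points recovers one camera's pose (rotation and translation) relative to the other. The sketch below uses a Kabsch/Procrustes fit — an assumed standard technique, not necessarily the paper's exact algorithm — and all function names are illustrative.

```python
import numpy as np

def polar_to_cartesian(depth, angle):
    """Convert depth/angle measurements (polar) to x,y points
    in the measuring camera's local frame."""
    depth = np.asarray(depth, dtype=float)
    angle = np.asarray(angle, dtype=float)
    return np.column_stack([depth * np.cos(angle), depth * np.sin(angle)])

def relative_pose(pts_a, pts_b):
    """Given the same person positions seen by camera A (pts_a, Nx2)
    and camera B (pts_b, Nx2), find R (2x2) and t (2,) such that
    pts_a ≈ pts_b @ R.T + t, i.e. camera B's pose in A's frame.
    Uses the 2-D Kabsch solution: center both point sets, take the
    SVD of the cross-covariance, and correct for reflections."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_b - cb).T @ (pts_a - ca)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ca - R @ cb
    return R, t
```

With at least two non-coincident shared observations (more, in practice, to average out noise), a central node could call `relative_pose` pairwise on cameras with overlapping FOVs and chain the resulting transforms into a single map of the network.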