
Sense researchers make advances in controlling active cameras

Unlike fixed cameras, which are statically positioned to better capture a global view of the scene, active cameras, or PTZs (pan-tilt-zoom), can rotate around their own axis vertically and horizontally and change their zoom. In surveillance, PTZs are useful for viewing targets far from the fixed camera, such as people, license plates, abandoned objects in public places, moving targets, and objects a person is carrying.

Sense researchers Renan Reis and Igor Henrique Dias, advised by Professor William Schwartz, developed this month a new method for identifying people in a scene using a combination of fixed and active cameras. Its main differential with respect to the literature is that it is based on machine learning rather than geometric calibration, which makes it possible to overcome some of the restrictions that affect calibration-based approaches.

Developed specifically for people detection, the approach combines a fixed camera, which provides an overview of the scene, with an active camera, which captures more refined and specific information (such as the face, hands, and foot movement) at higher resolution. “We have targets passing through the scene that need to be identified; this is our starting point,” explains Renan, who led the work.

How it works

Through machine learning, various situations are tested and, after a series of training runs, the method learns to retain the information it needs. For training, two cameras were installed on the third floor of the DCC, near the Sense laboratory, 6 to 8 meters away from the scene (the ground floor of the DCC). Several tests were run, each with about an hour of footage of a person passing through every point of the scene. When the fixed camera has a target in sight, the PTZ camera's task is to locate it and center it in its view in real time. To do this, the method relies on the person's spatial location in the fixed view: the x and y position of the bounding box and the box's size (width and height).
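
The sketch below is a hypothetical illustration of this idea, not the researchers' actual implementation: a regressor maps the fixed camera's bounding box features (x, y, width, height) to a PTZ command (pan, tilt, zoom). The sample values and the choice of a small neural network are placeholders.

```python
# Hypothetical sketch: learn a mapping from the fixed camera's bounding box
# to the PTZ command that centers the target, instead of using geometric
# calibration. All values and the choice of regressor are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training pairs collected while a person walks through the scene:
# fixed-camera bounding box (x, y, width, height) -> PTZ (pan, tilt, zoom).
X_train = np.array([
    [120, 340, 45, 110],
    [480, 310, 40, 105],
    [800, 290, 35, 100],
    [950, 280, 32,  95],
])
y_train = np.array([
    [-30.0, -5.0, 2.0],   # pan (deg), tilt (deg), zoom factor
    [  0.0, -4.0, 2.2],
    [ 25.0, -3.5, 2.5],
    [ 35.0, -3.0, 2.7],
])

# A small multilayer perceptron stands in for whatever model is trained.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# At run time: the fixed camera detects a person, and the model predicts the
# PTZ command that should center that person in the active camera's view.
new_box = np.array([[600, 300, 38, 102]])
pan, tilt, zoom = model.predict(new_box)[0]
print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg, zoom={zoom:.2f}x")
```

With enough training pairs, the learned mapping plays the role that a geometric calibration would otherwise play, without requiring knowledge of where the cameras are placed.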

Most methods in the current literature use geometric calibration to map the target in three dimensions, which requires recalibration whenever a camera's position changes. The method developed by Renan and Igor is independent of camera position and does not rely on calibration: it only requires people passing through the scene, from which it trains itself using points of correspondence between the fixed camera and the PTZ.
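
As a rough sketch of this self-training idea (the helpers detect_person, center_target_with_ptz and read_ptz_state are hypothetical placeholders, not part of the published method), correspondences can be accumulated online and the mapping refit as people pass through the scene:

```python
# Hypothetical sketch of the calibration-free, self-training idea: while people
# pass through the scene, pairs of (fixed-camera bounding box, PTZ state that
# centers the person) are accumulated and the regressor is refit periodically.
# detect_person, center_target_with_ptz and read_ptz_state are placeholders
# for whatever detector and camera interface the system actually uses.
from typing import List, Optional, Tuple

BoundingBox = Tuple[int, int, int, int]   # x, y, width, height in the fixed camera
PtzState = Tuple[float, float, float]     # pan, tilt, zoom of the active camera

def detect_person(frame) -> Optional[BoundingBox]:
    """Placeholder person detector applied to the fixed camera's frame."""
    raise NotImplementedError

def center_target_with_ptz(ptz, box: BoundingBox) -> None:
    """Placeholder: drive the PTZ until the detected person is centered."""
    raise NotImplementedError

def read_ptz_state(ptz) -> PtzState:
    """Placeholder: read the PTZ's current pan, tilt and zoom."""
    raise NotImplementedError

def self_train(frames, ptz, model,
               samples_X: List[BoundingBox], samples_y: List[PtzState]):
    """Accumulate fixed-camera/PTZ correspondences and refit the mapping."""
    for frame in frames:
        box = detect_person(frame)
        if box is None:
            continue
        # Once the PTZ has located and centered the person, the pair becomes a
        # new training correspondence; no geometric calibration is involved,
        # so moving a camera only means collecting fresh correspondences.
        center_target_with_ptz(ptz, box)
        samples_X.append(box)
        samples_y.append(read_ptz_state(ptz))
        if len(samples_X) % 50 == 0:     # refit as enough new pairs arrive
            model.fit(samples_X, samples_y)
    return model
```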

“Without a doubt, this is a great advantage of our method, because it greatly facilitates remote monitoring: maintenance constraints and other situations that alter the positioning of the cameras in the environment are overcome,” concludes Igor.

The researchers

Renan Oliveira Reis is a Ph.D. student in Computer Science whose work spans software engineering education, computer science in education, learning objects, virtual reality, 3D modeling in Blender, image processing, pattern recognition, dynamic laser speckle, and neural networks. Igor Henrique Dias is an undergraduate student in Information Systems and a researcher supported by CNPq.

DCC associate professor and Sense leader William Robson Schwartz is the author of more than 100 scientific papers on topics such as computer vision, intelligent surveillance, forensics, and biometrics. He coordinates research projects sponsored by agencies such as CNPq, Fapemig and Capes, and R&D projects sponsored by companies such as Samsung and Petrobras.

Follow Sense updates through the group's social networks, Facebook and Twitter.
