
Title:
Interactive Robotic Testbed for Performance Assessment of Machine Learning based Computer Vision Techniques.
Authors:
P. B., NITHIN1 nithinpb180@gmail.com, R., ALBERT FRANCIS1 albertfrancis32632@gmail.com, CHEMMANAM, AJAI JOHN1 ajaichemmanam@cusat.ac.in, JOSE, BIJOY A.1 bijoyjose@cusat.ac.in, MATHEW, JIMSON2 jimson@iitp.ac.in
Source:
Journal of Information Science & Engineering. Sep2020, Vol. 36 Issue 5, p1055-1067. 13p.
Database:
Supplemental Index

Computer vision, a widely researched topic over the years, got a shot in the arm with the arrival of high-performance and cloud computing. Online and offline techniques for object detection, recognition and tracking have a huge impact on real-world applications such as video surveillance, biometric authentication and targeted advertising. With machine learning, conventional feature-extraction-based implementations have given way to model-based implementations, which demand high compute speed to keep up with complex trained models. Computer vision with machine learning has solved some traditional problems, such as image classification, and now poses new problems in image processing, such as object tracking and object segmentation. Assessing the performance of various computer vision applications in object tracking, when used with machine learning solutions, is therefore a high priority. With this intent, we propose a robotic testbed for computer vision applications such as face recognition, tracking, gesture detection and character recognition. It has a hardware tracking system based on face detection and recognition. A fully functional robot with a table-lamp design is made to work with these applications using multiple algorithms, and their performance parameters are compared. Since a low-compute-power setup is used, the robot works properly only with optimized implementations. Visual intelligence to recognize gestures and the capability to read text were integrated into the robot. [ABSTRACT FROM AUTHOR]
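The abstract describes a hardware tracking system that steers the lamp-style robot from face detections on a low-compute-power setup. The sketch below is illustrative only, not the paper's implementation: it assumes a face detector (e.g. an OpenCV Haar cascade) runs upstream and supplies a bounding box, and it maps that box to pan/tilt corrections with a deadband to suppress servo jitter on weak hardware. The function name and parameters are hypothetical.

```python
# Illustrative sketch (not from the paper): mapping a detected face's
# bounding box to pan/tilt corrections for a lamp-style tracking robot.
# Face detection itself (e.g. OpenCV's CascadeClassifier) is assumed upstream.

def track_offset(face_box, frame_size, deadband=0.05):
    """Return (pan, tilt) corrections in [-1, 1] to centre the face.

    face_box   -- (x, y, w, h) of the detected face, in pixels
    frame_size -- (width, height) of the camera frame
    deadband   -- fraction of the frame within which no correction is
                  issued, to avoid jitter on a low-power setup
    """
    x, y, w, h = face_box
    fw, fh = frame_size
    # Normalised error of the face centre relative to the frame centre
    ex = (x + w / 2) / fw - 0.5
    ey = (y + h / 2) / fh - 0.5
    pan = 0.0 if abs(ex) < deadband else max(-1.0, min(1.0, 2 * ex))
    tilt = 0.0 if abs(ey) < deadband else max(-1.0, min(1.0, 2 * ey))
    return pan, tilt

# Face already centred in a 640x480 frame: no correction
print(track_offset((300, 220, 40, 40), (640, 480)))  # (0.0, 0.0)
# Face left of centre: negative pan, zero tilt
print(track_offset((100, 220, 40, 40), (640, 480)))  # (-0.625, 0.0)
```

In a real loop the corrections would be scaled to servo step sizes; the deadband is the kind of optimization the abstract alludes to when it notes the robot works properly only with optimized implementations.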