Details
Original language | English
Title of host publication | Proceedings
Subtitle of host publication | 2013 IEEE International Conference on Computer Vision Workshops, ICCVW 2013
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 83-90
Number of pages | 8
ISBN (print) | 9781479930227
Publication status | Published - 2013
Event | 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013 - Sydney, NSW, Australia
Event duration | 1 Dec 2013 → 8 Dec 2013

Publication series

Name | Proceedings of the IEEE International Conference on Computer Vision
Abstract
Gesture recognition remains a very challenging task in the fields of computer vision and human-computer interaction (HCI). A decade ago the task seemed almost unsolvable with the data provided by a single RGB camera. Thanks to recent advances in sensing technologies, such as time-of-flight and structured-light cameras, new data sources are available that make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from depth data provided by one of the above-mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields the recognition of the hand signs. The training time and memory required by the MLRF are much smaller than for a simple random forest of equivalent precision, which makes it possible to repeat the MLRF training procedure without significant effort. To demonstrate the advantages of our technique, we evaluate the algorithm on synthetic data, on a publicly available dataset containing 24 signs from American Sign Language (ASL), and on a new dataset collected with the recently released Intel Creative Gesture Camera.
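The pipeline summarized above (depth image → invariant feature vector → layered random-forest classification) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes the depth frames have already been converted into rotation-, translation- and scale-invariant descriptors (e.g. ESF-style histograms), it uses scikit-learn's RandomForestClassifier for both layers, and the `class_to_group` mapping of signs to coarse clusters is a hypothetical example rather than the partitioning used in the paper.

```python
# Minimal sketch of a multi-layered (two-stage) random forest for static
# hand-sign classification. Assumptions: the rows of X are rotation-,
# translation- and scale-invariant descriptors already computed from depth
# images; the class-to-group mapping is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class TwoLayerRandomForest:
    """Layer 1 predicts a coarse group of visually similar signs;
    one smaller layer-2 forest per group resolves the exact sign."""

    def __init__(self, class_to_group, n_trees=50):
        self.class_to_group = class_to_group        # e.g. {"a": 0, "b": 0, "c": 1, ...}
        self.layer1 = RandomForestClassifier(n_estimators=n_trees)
        self.layer2 = {}                            # group id -> second-layer forest
        self.n_trees = n_trees

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        groups = np.array([self.class_to_group[label] for label in y])
        self.layer1.fit(X, groups)                  # coarse classifier over groups
        for g in np.unique(groups):
            mask = groups == g
            clf = RandomForestClassifier(n_estimators=self.n_trees)
            clf.fit(X[mask], y[mask])               # fine classifier within the group
            self.layer2[g] = clf
        return self

    def predict(self, X):
        X = np.asarray(X)
        groups = self.layer1.predict(X)             # route each sample to a group
        preds = np.empty(len(X), dtype=object)
        for g in np.unique(groups):
            mask = groups == g
            preds[mask] = self.layer2[g].predict(X[mask])
        return preds
```

Because each second-layer forest only sees the samples of its group and has to separate far fewer classes, the individual forests can stay small; this is the intuition behind the reduced training time and memory reported for the MLRF compared to a single flat random forest of equivalent precision.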
Keywords
- ESF
- Hand gesture recognition
- Random forest
- Range sensor
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Computer Vision and Pattern Recognition
Cite this
Kuznetsova, A., Leal-Taixé, L., & Rosenhahn, B. (2013). Real-time sign language recognition using a consumer depth camera. In Proceedings: 2013 IEEE International Conference on Computer Vision Workshops, ICCVW 2013 (pp. 83-90). Article 6755883. (Proceedings of the IEEE International Conference on Computer Vision). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCVW.2013.18
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
RIS
TY - GEN
T1 - Real-time sign language recognition using a consumer depth camera
AU - Kuznetsova, Alina
AU - Leal-Taixé, Laura
AU - Rosenhahn, Bodo
PY - 2013
Y1 - 2013
KW - ESF
KW - Hand gesture recognition
KW - Random forest
KW - Range sensor
UR - http://www.scopus.com/inward/record.url?scp=84897551797&partnerID=8YFLogxK
U2 - 10.1109/ICCVW.2013.18
DO - 10.1109/ICCVW.2013.18
M3 - Conference contribution
AN - SCOPUS:84897551797
SN - 9781479930227
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 83
EP - 90
BT - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2013 14th IEEE International Conference on Computer Vision Workshops, ICCVW 2013
Y2 - 1 December 2013 through 8 December 2013
ER -