Dataset. 2024
How2Sign: a large-scale multimodal dataset for continuous American Sign Language
CORA.Repositori de Dades de Recerca
doi:10.34810/data33
- Cardoso Duarte, Amanda
- Giró Nieto, Xavier
- Palaskar, Shruti
- Ghadiyaram, Deepti
- Haan, Kenneth de
- Metze, Florian
- Torres Viñals, Jordi
How2Sign consists of a parallel corpus of 80 hours of sign language videos (collected with multi-view RGB and depth sensor data) with corresponding speech transcriptions and gloss annotations. In addition, a three-hour subset was further recorded in a geodesic dome setup using hundreds of cameras and sensors, which enables detailed 3D reconstruction and pose estimation and paves the way for vision systems to understand the 3D geometry of sign language.
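As a rough illustration of how the parallel structure described above might be consumed, the Python sketch below pairs each sign language clip with its English transcript. The directory layout, file names, and the SENTENCE_NAME/SENTENCE column names are assumptions for illustration only, not the dataset's documented schema; consult http://how2sign.github.io/ for the actual release format.

    # Minimal sketch: pair How2Sign video clips with English transcripts.
    # Paths and column names below are hypothetical, chosen for illustration.
    import csv
    from pathlib import Path

    VIDEO_DIR = Path("how2sign/train/rgb_front")          # assumed layout
    TRANSCRIPTS = Path("how2sign/train_transcripts.tsv")  # assumed file

    def load_pairs(video_dir: Path, transcripts: Path):
        """Yield (video_path, sentence) pairs for clips present on disk."""
        with transcripts.open(newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                clip = video_dir / f"{row['SENTENCE_NAME']}.mp4"  # assumed column
                if clip.exists():
                    yield clip, row["SENTENCE"]                   # assumed column

    if __name__ == "__main__":
        for clip, sentence in load_pairs(VIDEO_DIR, TRANSCRIPTS):
            print(clip.name, "->", sentence[:60])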
2 Related documents
Digital.CSIC. Repositorio Institucional del CSIC
oai:digital.csic.es:10261/263337
Conference publications: communications, presentations, posters, etc. (conferenceObject). 2022
How2Sign: A Large-Scale Multimodal Dataset for Continuous American Sign Language
- Duarte, Amanda
- Palaskar, Shruti
- Ventura, Lucas
- Ghadiyaram, Deepti
- DeHaan, Kenneth
- Metze, Florian
- Torres, Jordi
- Giró i Nieto, Xavier
Paper presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), held virtually June 19-25, 2021. One of the factors that have hindered progress in the areas of sign language recognition, translation, and production is the absence of large annotated datasets. Towards this end, we introduce How2Sign, a multimodal and multiview continuous American Sign Language (ASL) dataset, consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic studio enabling detailed 3D pose estimation. To evaluate the potential of How2Sign for real-world impact, we conduct a study with ASL signers and show that synthesized videos using our dataset can indeed be understood. The study further gives insights on challenges that computer vision should address in order to make progress in this field. Dataset website: http://how2sign.github.io/.
This work received funding from Facebook through gifts to CMU and UPC; through projects TEC2016-75976-R, TIN2015-65316-P, SEV-2015-0493 and PID2019-107255GB-C22 of the Spanish Government and 2017-SGR-1414 of Generalitat de Catalunya. This work used XSEDE's "Bridges" system at the Pittsburgh Supercomputing Center (NSF award ACI-1445606). Amanda Duarte has received support from la Caixa Foundation (ID 100010434) under the fellowship code LCF/BQ/IN18/11660029. Shruti Palaskar was supported by the Facebook Fellowship program.
UPCommons. Portal del coneixement obert de la UPC
oai:upcommons.upc.edu:2117/356423
Conference publications: communications, presentations, posters, etc. (conferenceObject). 2021
How2Sign: A Large-Scale Multimodal Dataset for Continuous American Sign Language
- Cardoso Duarte, Amanda
- Palaskar, Shruti
- Ventura Ripol, Lucas
- Ghadiyaram, Deepti
- DeHaan, Kenneth
- Metze, Florian
- Torres Viñals, Jordi (ORCID: 0000-0003-1963-7418)
- Giró Nieto, Xavier (ORCID: 0000-0002-9935-5332)
One of the factors that have hindered progress in the areas of sign language recognition, translation, and production is the absence of large annotated datasets. Towards this end, we introduce How2Sign, a multimodal and multiview continuous American Sign Language (ASL) dataset, consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic studio enabling detailed 3D pose estimation. To evaluate the potential of How2Sign for real-world impact, we conduct a study with ASL signers and show that synthesized videos using our dataset can indeed be understood. The study further gives insights on challenges that computer vision should address in order to make progress in this field. Dataset website: http://how2sign.github.io/.
This work received funding from Facebook through gifts to CMU and UPC; through projects TEC2016-75976-R, TIN2015-65316-P, SEV-2015-0493 and PID2019-107255GB-C22 of the Spanish Government and 2017-SGR-1414 of Generalitat de Catalunya. This work used XSEDE's "Bridges" system at the Pittsburgh Supercomputing Center (NSF award ACI-1445606). Amanda Duarte has received support from la Caixa Foundation (ID 100010434) under the fellowship code LCF/BQ/IN18/11660029. Shruti Palaskar was supported by the Facebook Fellowship program.
Peer Reviewed.
Sustainable Development Goals::4 - Quality Education
Sustainable Development Goals::4 - Quality Education::4.5 - By 2030, eliminate gender disparities in education and ensure equal access to all levels of education and vocational training for the vulnerable, including persons with disabilities, indigenous peoples and children in vulnerable situations
Sustainable Development Goals::10 - Reduced Inequalities
Sustainable Development Goals::10 - Reduced Inequalities::10.2 - By 2030, empower and promote the social, economic and political inclusion of all, irrespective of age, sex, disability, race, ethnicity, origin, religion or economic or other status
1 Version
CORA.Repositori de Dades de Recerca
doi:10.34810/data33
Dataset. 2024
How2Sign: a large-scale multimodal dataset for continuous American Sign Language