Fully automated image-based estimation of postural point-features in children with cerebral palsy using deep learning.

Royal Society Open Science (2019)

Abstract
The aim of this study was to provide automated identification of the postural point-features required to estimate the location and orientation of the head, multi-segmented trunk and arms from videos of the clinical test 'Segmental Assessment of Trunk Control' (SATCo). Three expert operators manually annotated 13 point-features in every fourth image of 177 short (5-10 s) videos (25 Hz) of 12 children with cerebral palsy (aged 4.52 ± 2.4 years) participating in SATCo testing. Linear interpolation for the remaining images resulted in 30 825 annotated images. Convolutional neural networks were trained with cross-validation, giving held-out test results for all children. The point-features were estimated with error 4.4 ± 3.8 pixels at approximately 100 images per second. Truncal segment angles (head, neck and six thoraco-lumbar-pelvic segments) were estimated with error 6.4 ± 2.8 degrees, allowing accurate classification (F1 > 80%) of deviation from a reference posture at thresholds up to 3 degrees, 3 degrees and 2 degrees, respectively. Contact between arm point-features (elbow and wrist) and the supporting surface was classified at F1 = 80.5%. This study demonstrates, for the first time, the technical feasibility of automating the identification of (i) a sitting segmental posture including individual trunk segments, (ii) changes away from that posture, and (iii) support from the upper limb, as required for the clinical SATCo.
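Two of the processing steps the abstract describes can be sketched in a few lines: linearly interpolating the manual annotations (made in every fourth frame) onto all frames, and classifying whether a trunk-segment angle deviates from a reference posture beyond a threshold. The following is a minimal illustration only; the function names, coordinate conventions and the 3-degree threshold are assumptions based on the abstract, not the authors' actual implementation.

```python
import numpy as np

def interpolate_annotations(annotated_frames, coords, n_frames):
    """Linearly interpolate (x, y) point-feature coordinates annotated
    at a subset of frame indices onto every frame of the video."""
    all_frames = np.arange(n_frames)
    x = np.interp(all_frames, annotated_frames, coords[:, 0])
    y = np.interp(all_frames, annotated_frames, coords[:, 1])
    return np.stack([x, y], axis=1)

def segment_angle(p_lower, p_upper):
    """Angle (degrees) of the segment joining a lower to an upper
    point-feature, measured from vertical (image y-axis points down)."""
    dx = p_upper[0] - p_lower[0]
    dy = p_lower[1] - p_upper[1]  # flip sign so 'up' is positive
    return np.degrees(np.arctan2(dx, dy))

def deviates_from_reference(angle, reference_angle, threshold_deg=3.0):
    """Binary deviation classification at a fixed angular threshold
    (the abstract reports accurate classification up to ~3 degrees)."""
    return abs(angle - reference_angle) > threshold_deg
```

For example, annotations at frames 0 and 4 with coordinates (0, 0) and (4, 8) interpolate to (2, 4) at frame 2, and a segment leaning 10 pixels sideways over a 10-pixel rise gives a 45-degree angle from vertical.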
Keywords
cerebral palsy, deep learning, feature tracking, pose estimation, SATCo, video analysis