Cross-Modal Knowledge Distillation for Human Trajectory Prediction in Virtual Reality
Abstract
Scene context that captures spatio-temporal interactions between people and other entities significantly improves the accuracy of activity recognition and motion forecasting tasks, such as human trajectory prediction, but is difficult to obtain. Virtual reality (VR) offers an opportunity to generate and simulate diverse scenes with contextual information, which can potentially inform real-life scenarios. We design a teacher model that leverages heterogeneous graphs constructed from VR scene annotations to enhance prediction accuracy. This ongoing work proposes cross-modal knowledge distillation (CMKD), transferring knowledge from the VR-constructed graphs to a student model that uses scene point clouds. Preliminary results show the potential of CMKD to transfer contextual information that significantly improves the prediction accuracy of the student model.
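To make the teacher-student setup concrete, below is a minimal, hypothetical PyTorch sketch of a cross-modal distillation objective of the kind the abstract describes. The module `StudentPointNet`, its dimensions, the loss weighting `alpha`, and the feature-matching formulation are illustrative assumptions, not the authors' implementation; the teacher embedding is assumed to come from a separately trained, frozen graph-based model.

```python
# Hypothetical sketch of cross-modal knowledge distillation (CMKD) for
# trajectory prediction: a point-cloud student is trained to mimic the
# scene embedding of a (frozen) heterogeneous-graph teacher while also
# minimizing its own trajectory prediction error.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentPointNet(nn.Module):
    """Hypothetical student: encodes a scene point cloud and an observed
    trajectory into embeddings, then regresses future 2-D positions."""
    def __init__(self, embed_dim=128, obs_len=8, horizon=12):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, embed_dim))
        self.traj_mlp = nn.Linear(2 * obs_len, embed_dim)
        self.head = nn.Linear(2 * embed_dim, 2 * horizon)
        self.horizon = horizon

    def forward(self, points, past):
        # points: (B, N, 3) scene point cloud; past: (B, obs_len, 2) trajectory
        scene_emb = self.point_mlp(points).max(dim=1).values  # permutation-invariant pooling
        traj_emb = self.traj_mlp(past.flatten(1))
        pred = self.head(torch.cat([scene_emb, traj_emb], dim=-1))
        return pred.view(-1, self.horizon, 2), scene_emb

def cmkd_loss(student_pred, student_emb, teacher_emb, future, alpha=0.5):
    # Task loss: L2 error on the predicted future trajectory.
    task = F.mse_loss(student_pred, future)
    # Distillation loss: align the student's point-cloud scene embedding
    # with the teacher's graph-based embedding of the same scene.
    distill = F.mse_loss(student_emb, teacher_emb.detach())
    return task + alpha * distill
```

At inference time only the student runs, so the VR scene annotations required to build the teacher's heterogeneous graphs are no longer needed; that is the practical payoff of the distillation step.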