2025

Journal of Engineering and Applied Science

ST-AMMFN: A Spatio-Temporal Attention-based Multi-Modal Fusion Network for gait data simulation and rehabilitation effect prediction in ankle injury patients

Yongzhong Zhang, Xiaoqi Meng, Chengcheng Jiang

School of Medical Technology, Shandong University of Engineering and Vocational Technology, Jinan, China

Keywords

deep learning, multimodal fusion, ankle rehabilitation, gait analysis, temporal attention, recovery prediction, sensor insoles, plantar pressure

Abstract

This paper presents a novel deep learning framework for gait data simulation and rehabilitation effect prediction in patients with ankle injuries. We propose the Spatio-Temporal Attention-based Multi-Modal Fusion Network (ST-AMMFN), which effectively integrates heterogeneous data from multiple sources, including demographic information, clinical assessments, biomechanical measurements, and wearable sensor time-series data. The ST-AMMFN architecture features specialized encoders for each data modality, a hierarchical attention mechanism that captures both temporal dynamics and modality importance, and a multi-task prediction structure that simultaneously forecasts rehabilitation progress, ultimate recovery level, and expected rehabilitation duration. We developed and evaluated ST-AMMFN on real clinical data from 300 ankle injury patients, using it both for rehabilitation outcome prediction and for synthetic gait data generation to augment training datasets. Experimental results demonstrate that our proposed model significantly outperforms traditional machine learning approaches and state-of-the-art deep learning methods, achieving an RMSE of 0.219 and R² of 0.871 in rehabilitation effect prediction, representing 13.8% lower RMSE and 5.2% higher R² compared to the best baseline methods. The interpretability provided by the attention mechanism further allows clinicians to identify critical time points and influential data modalities in the rehabilitation process, potentially enabling more personalized intervention strategies. Our approach presents a promising direction for integrating artificial intelligence into rehabilitation medicine.

Moticon's Summary

The researchers utilized Moticon smart insoles equipped with 16 pressure sensors to collect continuous, real-world data on plantar pressure distribution, foot loading patterns, and center of pressure during daily activities. These measurements formed a core part of the wearable sensor data modality, which the ST-AMMFN model used to capture the complex spatiotemporal dynamics of gait. The study found that for patients with severe injuries, the Moticon sensor data became increasingly important for predicting recovery, with the model assigning higher attention weights to this modality in those cases. By integrating this high-fidelity sensor data, the model achieved a high prediction accuracy (R² of 0.871) and was able to generate realistic synthetic gait data that preserved clinically essential features such as peak loading during the push-off phase.
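The modality-level attention described above, where each data source (demographics, clinical assessments, sensor time series) receives a learned weight before fusion, can be illustrated with a minimal sketch. This is pure Python with hypothetical modality names, embeddings, and raw scores; the paper's actual encoders and attention parametrization are not specified here:

```python
import math

def softmax(scores):
    """Numerically stable softmax over raw attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_modalities(embeddings, raw_scores):
    """Attention-weighted fusion of per-modality embedding vectors.

    embeddings: dict of modality name -> fixed-length feature vector
    raw_scores: dict of modality name -> unnormalized attention score
    Returns the fused vector and the normalized attention weights.
    """
    names = list(embeddings)
    weights = softmax([raw_scores[n] for n in names])
    dim = len(next(iter(embeddings.values())))
    fused = [0.0] * dim
    for w, n in zip(weights, names):
        for i, x in enumerate(embeddings[n]):
            fused[i] += w * x
    return fused, dict(zip(names, weights))

# Hypothetical per-modality embeddings and scores (illustration only).
emb = {
    "demographics": [0.2, 0.1],
    "clinical":     [0.5, 0.4],
    "sensor":       [0.9, 0.8],
}
raw = {"demographics": 0.1, "clinical": 0.7, "sensor": 1.5}
fused, attn = fuse_modalities(emb, raw)
```

Because the weights are normalized per patient, a clinician can read them directly: in this toy example the "sensor" modality dominates the fused representation, mirroring the study's finding that insole data carried more weight for severely injured patients.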
