Paul M. Khoury
A personal engineering portfolio webpage inspired by Wikipedia, the free encyclopedia.
Predicting User Intent from EMG Signals
The primary objective of this project is to develop a machine learning (ML) model capable of accurately interpreting surface electromyographic (sEMG) signals to classify user intent, with applications in smart prosthetic control. sEMG signals capture muscle activation potentials non-invasively through electrodes placed on the skin, offering a promising but challenging means of decoding voluntary limb movement. This technology forms the foundation for “smart” prosthetics: artificial limbs whose actuation can approach that of natural biological limbs.
This research addresses key limitations of today's commercial prosthetic systems, which typically require manual calibration, support only a limited set of movements, or generalize poorly across users. Using supervised learning and refined signal-analysis methods, this project develops an automated approach to recognizing specific hand gestures from raw muscle signals. While earlier studies have shown good results with custom-designed features and simpler classifiers, this work compares traditional and newer machine learning approaches, tests how well they generalize across conditions, and critically reviews assumptions made in previous research.
The ultimate goal, which is beyond the scope of this report, is to contribute to systems that operate in real time and accommodate the full range of human motor abilities. If fully realized, this technology could seamlessly interpret the “thoughts” behind fine motor actions such as typing or sewing.
A comprehensive literature review informed key aspects of the study design. Kok et al. evaluated multiple feature extraction methods (MAV, RMS, DWT) and compared different classification models (SVM, KNN, Naïve Bayes) for anatomical motion prediction. Their work demonstrated the efficacy of statistical features like root mean square (RMS) and mean absolute value (MAV) for EMG classification, reinforcing the decision to apply time-domain statistical measures that collapse temporal structure into feature vectors suitable for time-independent classifiers.
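As an illustration of how such time-domain features collapse a signal window into a fixed-length vector, the sketch below computes MAV and RMS per channel for a single window. The 200-sample window length and eight-channel shape are illustrative assumptions (eight channels matching the Myo armband), not values taken from the cited studies:

```python
import numpy as np

def mav(window):
    """Mean absolute value, (1/N) * sum(|x_i|), computed per channel."""
    return np.mean(np.abs(window), axis=0)

def rms(window):
    """Root mean square, sqrt((1/N) * sum(x_i^2)), computed per channel."""
    return np.sqrt(np.mean(np.square(window), axis=0))

# Hypothetical window: 200 samples x 8 EMG channels of synthetic noise
rng = np.random.default_rng(0)
window = rng.standard_normal((200, 8))

# Concatenating both statistics yields one 16-dimensional feature
# vector per window, suitable for a time-independent classifier.
features = np.concatenate([mav(window), rms(window)])
```

Because each window is reduced to a single vector, the classifier never sees the raw time series, only these summary statistics.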
Shakya and Ranjitkar explored convolutional neural networks (CNNs) and principal component analysis (PCA) for enhancing pattern recognition in forearm biomedical signals, providing insights into using multiple data forms to train classifiers. Their work with the GRABMyo dataset offered a useful reference point for data collection methodology.
Additionally, a review by Mhiriz et al. synthesized current challenges in sEMG-based classification, including inter-subject variability and noise robustness. Their paper addressed common issues like low signal-to-noise ratios and suggested methods to combat these problems using various models, including Support Vector Machines and Random Forests. This comprehensive overview guided preprocessing strategies and model selection for the project.
The dataset originated from the UC Irvine Machine Learning Repository and was collected with the Myo Thalmic Bracelet, a commercially available (though now discontinued) sEMG acquisition device with eight EMG channels and consistent electrode placement, providing ample data for analysis.
The project began with thorough data preparation and cleaning. Millions of raw data points were refined into a more manageable dataset by removing noise and calculating statistical features from each channel. The data was then standardized and visualized to better understand the patterns among different hand gestures.
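The preparation steps described above can be sketched as a sliding-window pipeline. The window length, step size, and the choice of MAV and RMS as the per-channel statistics are assumptions for illustration, not the project's exact settings:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def extract_features(raw, window_size=200, step=100):
    """Slide a window over raw sEMG of shape (n_samples, n_channels)
    and compute MAV and RMS per channel for each window."""
    feats = []
    for start in range(0, raw.shape[0] - window_size + 1, step):
        w = raw[start:start + window_size]
        mav = np.mean(np.abs(w), axis=0)
        rms = np.sqrt(np.mean(w ** 2, axis=0))
        feats.append(np.concatenate([mav, rms]))
    return np.array(feats)

# Hypothetical 8-channel recording standing in for the raw dataset
rng = np.random.default_rng(1)
raw = rng.standard_normal((2000, 8))

X = extract_features(raw)                  # (n_windows, 16)
X_std = StandardScaler().fit_transform(X)  # zero mean, unit variance per feature
```

Standardizing each feature column keeps channels with larger amplitudes from dominating distance-based models downstream.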
Starting from a support vector machine (SVM) model as a baseline, the project evaluated performance using standard accuracy metrics, drawing on techniques identified in the literature review. Additional features were then tested, and performance was ultimately compared against a neural network model.
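A baseline along these lines might look like the following sketch. The two synthetic 16-dimensional clusters stand in for the real engineered feature matrix, and the RBF kernel and default hyperparameters are illustrative choices, not the project's actual configuration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the feature matrix: two well-separated
# "gesture" clusters in 16-dimensional feature space.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 16)), rng.normal(3, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)          # baseline SVM classifier
acc = accuracy_score(y_te, clf.predict(X_te))    # held-out accuracy
```

Holding out a stratified test split keeps the accuracy estimate honest for each gesture class rather than favoring the majority class.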
Results showed the neural network achieved 94% accuracy, outperforming both the published benchmark and the baseline model. These findings confirm that even relatively simple statistical features from muscle signals can produce highly accurate gesture classification when paired with the right models. However, questions about how well these models work across different people remain open and will be explored in future work.
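For comparison, a small feed-forward network can be trained on the same kind of standardized feature vectors. The synthetic data and layer sizes below are illustrative assumptions; the 94% figure above comes from the real dataset, not from this sketch:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic two-class feature data standing in for the real gestures
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (150, 16)), rng.normal(3, 1, (150, 16))])
y = np.array([0] * 150 + [1] * 150)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Small multilayer perceptron on the 16-dimensional feature vectors
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
acc = accuracy_score(y_te, mlp.predict(X_te))
```

Evaluating both models on an identical split is what makes the SVM-versus-network comparison meaningful.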
This project demonstrates the feasibility of creating intelligent prosthetic control systems using machine learning to interpret sEMG signals. The report details the process and the results.