Abstract:
Existing human-machine interface (HMI) systems suffer from limited command sets, complex operation, and restricted task capabilities, which prevent effective extension to multi-dimensional motion control of robotic arms. This paper introduces a method for controlling robotic arm movements based on a wearable hybrid HMI. The method combines multiple user signals, including electrooculography (EOG), head posture, and speech, and transforms them into control commands, thereby enabling continuous two-dimensional (2D) and three-dimensional (3D) motion control of the robotic arm at arbitrary angles. Ten participants completed tests involving command output, 2D target tracking, alphabet writing, and 3D object grasping. The results show that the blink-generated commands of the proposed system achieve an average accuracy of 96.67%, an average response time of 1.51 s, an average information transfer rate (ITR) of 142.53 bits/min, and an average false positive rate (FPR) of 0.05 events/min. Additionally, the root mean square deviations of target tracking along two different routes on a 2D plane are 0.12 and 0.14 (normalized), and the average trajectory efficiency of 3D object grasping is 92.65%. The control performance of the system is comparable to that of manual control. These results verify the feasibility of using a hybrid HMI for efficient motion control of robotic arms and its potential application in assisting individuals with impaired upper-limb mobility.
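A note on the ITR metric: in HMI and brain-computer interface studies, the information transfer rate is conventionally computed with the Wolpaw formula; the abstract does not state which definition this paper uses, so the following is an assumption based on standard practice. With $N$ the number of available commands, $P$ the command accuracy, and $T$ the average time per selection in seconds,

\[
\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\frac{1-P}{N-1}\right]\ \text{bits/min}.
\]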