This research position was awarded to me by the Natural Sciences and Engineering Research Council of Canada (NSERC). My main focus was to design a robotic control method for potential prosthetic use, based on surface electromyography (sEMG) sensors, which measure muscle activity from the surface of the user's skin.
I developed a control algorithm for a soft-robotic hand driven by real-time sEMG signals. At its core is a hybrid LSTM/CNN neural network, built in PyTorch, that classifies the sEMG time series with 91% accuracy. I then integrated the setup with ROS2 Foxy and a UR5 robotic arm, enabling precise robotic control and object grasping directly from raw sEMG signals.
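The hybrid architecture can be sketched as follows: 1-D convolutions extract local features from each window of multi-channel sEMG data, and an LSTM models how those features evolve over time before a final linear layer produces gesture logits. The specific layer sizes, electrode count, and number of gesture classes below are illustrative assumptions, not the exact configuration from the project.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """A minimal hybrid CNN/LSTM sketch for sEMG time-series classification.

    Channel counts, kernel sizes, and class count are placeholder values.
    """
    def __init__(self, n_channels=8, n_classes=6, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features along the time axis
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models temporal dependencies across the feature sequence
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = self.conv(x)            # (batch, 64, time)
        feats = feats.transpose(1, 2)   # (batch, time, 64) for the LSTM
        _, (h_n, _) = self.lstm(feats)  # take the final hidden state
        return self.fc(h_n[-1])         # (batch, n_classes) gesture logits

# Example: classify a batch of 200-sample windows from 8 electrodes
model = CNNLSTMClassifier()
logits = model(torch.randn(4, 8, 200))
print(logits.shape)  # torch.Size([4, 6])
```

In a real-time pipeline, a sliding window of recent sEMG samples would be fed through this network on each control tick, and the predicted class would be mapped to a hand command published over ROS2.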
The goal of this thesis was to let a robot “see” an object, identify it, and figure out the best way to grab it. Though seemingly simple, this process involved neural-network-driven image segmentation and 3D data classification, with a touch of inverse kinematics.
I developed a perception system that applies Mask R-CNN to LiDAR RGB-D images to isolate objects and generate per-object 3D point clouds, classifying each object's RGB-D image with 98% accuracy. I then created a deep learning model that predicts the optimal grasping direction and technique: given an object's geometry and orientation, it identifies the best approach to grasp or pinch it with 93% accuracy, vastly outperforming 2D CNNs trained on the same dataset. Finally, I programmed the motion of a UR5 robotic arm and hand through ROS2 control for precise and efficient operation.
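The step from a segmentation mask to an object point cloud can be sketched with the standard pinhole back-projection: the depth pixels selected by one instance mask (e.g. from Mask R-CNN) are lifted into 3D using the camera intrinsics. The intrinsics below are illustrative placeholders, not the calibration of the actual sensor used in the thesis.

```python
import numpy as np

# Placeholder pinhole intrinsics (focal lengths and principal point);
# a real system would use the calibrated values for its RGB-D camera.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def mask_to_point_cloud(depth, mask):
    """Back-project the depth pixels under a binary instance mask
    into an (N, 3) object point cloud in the camera frame."""
    v, u = np.nonzero(mask)        # pixel coordinates inside the mask
    z = depth[v, u]                # metric depth at those pixels
    x = (u - CX) * z / FX          # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)

# Example: a synthetic 480x640 depth map with one 20x30-pixel object mask
depth = np.full((480, 640), 0.8)   # flat scene 0.8 m from the camera
mask = np.zeros((480, 640), dtype=bool)
mask[200:220, 300:330] = True
cloud = mask_to_point_cloud(depth, mask)
print(cloud.shape)  # (600, 3)
```

Each per-object cloud produced this way is what a 3D classifier or grasp-direction network would consume, which is where a point-based model can exploit geometry that a 2D CNN on the same data cannot.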
Sebastian Levy - Portfolio