My research focused on a next-generation golf simulator that replicates real course conditions using a dynamically adjustable platform with nine degrees of freedom. Unlike traditional simulators, ours allows golfers to practice on realistic slopes and lies while receiving real-time swing analysis powered by computer vision. The system provides precise control over terrain adjustments, shot tracking, and performance feedback, making professional-grade training accessible to all players.
My role centered on mechanical design and system integration, including developing the scissor-lift mechanism that enables smooth, precise adjustments for varied golf lies. I also integrated wearable IMU sensors for enhanced swing analysis, allowing golfers to track detailed motion data alongside computer-vision-based pose estimation. By combining hardware design, motion tracking, and real-time feedback, the system gives golfers a more realistic and effective training tool for refining their technique in a controlled environment.
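To give a flavour of the swing-analysis side, here is a minimal sketch of how raw gyroscope data from a wrist-worn IMU could be reduced to simple swing metrics. The sample rate, data layout, and metric names are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

# Hypothetical sample rate and layout: rows of (gx, gy, gz) in rad/s
# from a wrist-worn IMU, captured over a single swing.
GYRO_HZ = 200

def swing_metrics(gyro: np.ndarray) -> dict:
    """Compute simple swing metrics from raw gyroscope samples.

    gyro: (N, 3) array of angular velocity in rad/s.
    """
    # Magnitude of angular velocity at each sample
    speed = np.linalg.norm(gyro, axis=1)
    peak_idx = int(np.argmax(speed))
    return {
        "peak_angular_speed_rad_s": float(speed[peak_idx]),
        # Time from start of capture to peak rotation (rough impact proxy)
        "time_to_peak_s": peak_idx / GYRO_HZ,
        "swing_duration_s": len(speed) / GYRO_HZ,
    }

# Example with synthetic data
if __name__ == "__main__":
    t = np.linspace(0, 1.5, int(1.5 * GYRO_HZ))
    fake = np.stack([np.sin(4 * t) * 10, np.cos(4 * t) * 8, t * 2], axis=1)
    print(swing_metrics(fake))
```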
This research position was awarded to me by the Natural Sciences and Engineering Research Council of Canada (NSERC). My main focus was designing a robotic control method for potential prosthetic use. This involved surface electromyography (sEMG) sensors, which read muscle activation signals from the surface of the user's skin.
I developed a control algorithm for a soft-robotic hand driven by real-time sEMG input. To do this, I built a hybrid LSTM/CNN neural network in PyTorch for time-series classification, achieving 91% accuracy. I then integrated the setup with ROS2 Foxy and a UR5 robotic arm, enabling precise robotic control and object grasping directly from raw sEMG signals.
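To give a sense of the architecture, here is a minimal sketch of a hybrid CNN/LSTM classifier of the kind described above: convolutions extract local features from each sEMG window, and an LSTM models their temporal dynamics. Channel count, window length, and the number of gesture classes are illustrative assumptions, not the exact values used in the project.

```python
import torch
import torch.nn as nn

class EMGClassifier(nn.Module):
    """Hybrid CNN/LSTM for windowed sEMG time-series classification.

    Assumed input shape: (batch, channels, time), e.g. 8 electrode
    channels over a 200-sample window. All sizes are illustrative.
    """
    def __init__(self, n_channels: int = 8, n_classes: int = 6):
        super().__init__()
        # 1D convolutions extract local features along the time axis
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models the temporal dynamics of the conv features
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.conv(x)              # (batch, 64, time)
        feats = feats.transpose(1, 2)     # (batch, time, 64)
        _, (h_n, _) = self.lstm(feats)    # final hidden state
        return self.head(h_n[-1])         # (batch, n_classes)

# Example: classify a batch of 4 windows of 8-channel sEMG
model = EMGClassifier()
logits = model(torch.randn(4, 8, 200))
print(logits.shape)  # torch.Size([4, 6])
```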
The goal of this thesis was to let a robot “see” an object, identify it, and determine the best way to grasp it. Though seemingly simple, this process involved neural-network-driven image segmentation and 3D data classification, with a touch of inverse kinematics.
I developed a perception system using Mask R-CNN on LiDAR RGB-D images to isolate objects and generate 3D point clouds, classifying each object's RGB-D image with 98% accuracy. I also created a deep learning model that predicts optimal grasping directions and techniques: given an object's geometry and orientation, it identifies the best approach to grasp or pinch the object with 93% accuracy, vastly outperforming 2D CNNs trained on the same dataset. Finally, I used a UR5 robotic arm with ROS2 control, programming the motion of the arm and hand for precise, efficient operation.
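As an illustration of the first stage, here is a minimal sketch of instance segmentation with a pre-trained Mask R-CNN followed by back-projection of each masked depth region into a camera-frame point cloud. The camera intrinsics (fx, fy, cx, cy) and the score threshold are placeholder values, and the COCO checkpoint stands in for the project's own trained weights.

```python
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Pre-trained COCO weights stand in for the project's own model
model = maskrcnn_resnet50_fpn(pretrained=True).eval()

def object_point_clouds(rgb: np.ndarray, depth: np.ndarray,
                        fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                        score_thresh=0.8):
    """Segment objects in an RGB image and lift each mask to 3D points.

    rgb:   (H, W, 3) uint8 image
    depth: (H, W) depth in meters, aligned with rgb
    Camera intrinsics are placeholder values.
    """
    img = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([img])[0]

    clouds = []
    for mask, score in zip(out["masks"], out["scores"]):
        if score < score_thresh:
            continue
        m = mask[0].numpy() > 0.5
        v, u = np.nonzero(m & (depth > 0))   # pixel coords inside the mask
        z = depth[v, u]
        # Pinhole back-projection: pixel coords + depth -> camera-frame XYZ
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        clouds.append(np.stack([x, y, z], axis=1))
    return clouds
```

Each returned point cloud feeds the downstream grasp-direction model, which consumes the 3D geometry directly rather than a flat 2D view.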