EE Robotics Projects

Principles of Robotics Final Project

My group proposed an autonomous platform capable of random wandering and obstacle avoidance using a low-cost LiDAR for our Principles of Robotics final project. In addition to wandering, we proposed a basic computer vision system to visually identify a known target and fire a Nerf dart toward it. We had a large, capable chassis from one of our group members' previous projects, which made an ideal platform for this final project. Through this work we gained experience with closed-loop motor control, computer vision primitives such as blob detectors, and embedded development on an NVIDIA Jetson TX2 board.

My primary duties on this project were integrating a low-cost YP-LiDAR for obstacle detection and an omnidirectional camera for full 360-degree target identification. I set up the ROS environment on the Jetson TX2 and configured serial communication between the main board, the LiDAR, and the supporting Arduino dedicated to motor control. I wrote an autonomous wandering script that receives LaserScan messages from the LiDAR node and produces independent track velocity commands to avoid obstacles. I based this behavior on the artificial potential field approach to robot navigation, in which every obstacle exerts a repelling force on the robot and the net force determines the desired travel direction.
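The sketch below illustrates the potential-field idea, assuming a rospy node that subscribes to the LiDAR scan topic and publishes a geometry_msgs/Twist command. The actual node converted the net force into per-track velocities sent to the Arduino; the topic names, gains, and clamping values here are illustrative rather than the values used on the robot.

    # potential_field_wander.py -- minimal sketch of the repulsive-force wander behavior.
    # Topic names and gains are illustrative, not the exact values used on the robot.
    import math
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    REPULSE_GAIN = 0.5   # strength of obstacle repulsion
    FORWARD_BIAS = 1.0   # constant attractive force pulling the robot forward
    MAX_RANGE = 3.0      # ignore returns beyond this distance (meters)

    def scan_callback(scan):
        # Sum a repulsive force vector over all valid LiDAR returns.
        fx, fy = FORWARD_BIAS, 0.0
        for i, r in enumerate(scan.ranges):
            if math.isinf(r) or math.isnan(r) or r <= scan.range_min or r > MAX_RANGE:
                continue
            angle = scan.angle_min + i * scan.angle_increment
            # Closer obstacles push harder (1/r^2 falloff), directed away from the return.
            magnitude = REPULSE_GAIN / (r * r)
            fx -= magnitude * math.cos(angle)
            fy -= magnitude * math.sin(angle)

        # The direction of the net force becomes the commanded heading; the real node
        # converted this into left/right track velocities for the Arduino.
        cmd = Twist()
        cmd.linear.x = max(0.0, min(0.3, 0.3 * fx))            # clamp forward speed
        cmd.angular.z = max(-1.0, min(1.0, math.atan2(fy, fx)))  # turn toward the net force
        cmd_pub.publish(cmd)

    if __name__ == "__main__":
        rospy.init_node("potential_field_wander")
        cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("scan", LaserScan, scan_callback)
        rospy.spin()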

My OpenCV-based target detection routine uses a simple blob detector with an HSV mask tuned for a blue frisbee in our operating environment. The omnidirectional camera image used as input provides a full 360-degree view around the robot. One of the main benefits of this camera is that the heading of any detected target can be read directly from its angular position in the image, without any intrinsic calibration. Once the target heading is acquired, a simple PID controller routine rotates the robot in place until the Nerf blaster faces the target and a dart is fired.
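A minimal sketch of this detection step is shown below, using OpenCV's SimpleBlobDetector on an HSV mask. The threshold values, blob parameters, and the assumption that the mirror axis sits at the image center are illustrative, not the values tuned for our environment.

    # target_heading.py -- sketch of the blue-target detector on the omnidirectional image.
    import math
    import cv2
    import numpy as np

    LOWER_BLUE = np.array([100, 120, 60], dtype=np.uint8)   # illustrative HSV range
    UPPER_BLUE = np.array([130, 255, 255], dtype=np.uint8)

    # Configure the blob detector to look for large white regions in the binary mask.
    _params = cv2.SimpleBlobDetector_Params()
    _params.filterByColor = True
    _params.blobColor = 255
    _params.filterByArea = True
    _params.minArea = 200        # ignore small noise blobs (pixels)
    _detector = cv2.SimpleBlobDetector_create(_params)

    def find_target_heading(frame, center=None):
        """Return the target bearing in radians, or None if no blob is found."""
        if center is None:
            # Assume the mirror axis sits at the image center.
            center = (frame.shape[1] / 2.0, frame.shape[0] / 2.0)

        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        keypoints = _detector.detect(mask)
        if not keypoints:
            return None

        # Largest blob is assumed to be the frisbee; its angle about the image
        # center maps directly to a heading around the robot.
        kp = max(keypoints, key=lambda k: k.size)
        return math.atan2(kp.pt[1] - center[1], kp.pt[0] - center[0])

The returned bearing is what feeds the rotation controller: the robot turns in place until the bearing of the target matches the mounting angle of the blaster.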

Advanced Robotics Final Project

For the Advanced Robotics course in the Electrical Engineering department, I proposed an independent final project on visual robot odometry. My goal was to gain experience working with existing visual odometry implementations from the literature and to implement my own version.

For many autonomous platforms, wheel odometry can be an unreliable source of pose information. Wheel slippage and variations in tire shape reduce the accuracy of odometry-based positioning. By using additional sensors to observe the world around an agent, position estimates can be vastly improved over odometry alone. In this work, I created a visual odometry and mapping implementation using an omnidirectional camera. The goal is to allow a mobile agent to accurately track its movements through the environment while constructing a 3D map from visual features. The semi-direct visual odometry (SVO) implementation presented by Forster et al. was also evaluated and compared against my custom implementation in terms of pose estimation and mapping accuracy.

The video above shows both my custom visual odometry implementation and SVO tracking and mapping in real time on a recorded movement sequence. One shortcoming of my visual odometry implementation is how scale is handled. Monocular visual odometry requires assumptions about the scale of the environment: as features are tracked between frames and triangulated in 3D space, it is impossible to recover the true translation scale between key viewpoints. Features triangulated between frames captured two meters apart and between frames captured four meters apart will yield the same unit translation magnitude. To resolve this, some systems such as the Intel RealSense T265 tracking camera (which actually runs its two cameras as redundant monocular SLAM instances rather than as a stereo pair) use an inertial measurement unit to recover the translation scale. Systems relying on a monocular camera alone commonly perform an initial scale calibration step to establish a unit scale that all subsequent triangulated features are matched to. While the triangulated features do not match the scale of the real environment, they remain consistent throughout the sequence.
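The snippet below illustrates where the ambiguity comes from, assuming matched feature points between two frames and a known camera matrix: OpenCV's recoverPose returns the translation only as a unit direction, so the metric baseline has to be supplied from elsewhere.

    # scale_ambiguity.py -- monocular two-view geometry only recovers translation direction.
    import cv2
    import numpy as np

    def relative_pose(pts_prev, pts_curr, K):
        """Estimate rotation and a unit-length translation between two views.

        pts_prev/pts_curr are Nx2 arrays of matched feature points; K is the 3x3
        camera matrix. |t| is always ~1: frames taken two meters apart and frames
        taken four meters apart produce the same unit translation.
        """
        E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                          method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
        return R, t / np.linalg.norm(t)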

My visual odometry script does not establish scale with a calibration step of this kind. Instead, it uses the odometry computed by my custom base controller node, which receives independent wheel velocities from the robot base. Using the translation magnitudes from these odometry messages, matched to timestamped keyframes, I consistently scale the triangulated features in my implementation. While this leaves my implementation vulnerable to translation inaccuracies from the wheel odometry, the observed rotational error in the prerecorded sequence is substantially reduced compared to wheel odometry alone. In the sequence, the robot is directed from one cyan cube to another while passing through the two green cubes; these visual markers correspond to measured markers taped to the floor of the test environment. The wheel odometry, shown as red arrows, quickly becomes inaccurate once rotations occur. The green arrows, representing my custom visual odometry implementation, follow the expected trajectory much more closely. I also visualize the triangulated features as white points, which align closely with the corners of the overhead light boxes.
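A rough sketch of this scaling step is shown below; the helper and its arguments are hypothetical (in practice the odometry positions come from the /odom messages nearest to each keyframe's timestamp), but the idea matches what the node does: the unit translation, and the points triangulated with it, are multiplied by the distance the wheel odometry reports between the two keyframes.

    # Hypothetical helper: recover metric scale from wheel odometry between keyframes.
    import numpy as np

    def apply_wheel_odom_scale(odom_pos_prev, odom_pos_curr, t_unit, points_3d):
        # Distance the wheel odometry travelled between the two keyframe timestamps.
        travelled = np.linalg.norm(np.asarray(odom_pos_curr) - np.asarray(odom_pos_prev))
        # Both the unit translation and the triangulated points scale linearly with the baseline.
        return travelled * t_unit, travelled * points_3d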

The following clip, shown with a white background, is the output of SVO on the same robot movement sequence. The environment reconstruction is much more accurate thanks to better handling of scale consistency. While the scale is consistent across the triangulated features in this implementation, the overall scale does not match the environment because no external scale information is provided. If objects of known size were detected visually, or wheel odometry were integrated, the SVO output could be scaled accurately to the environment.

Overall this work was a success. I gained experience working with existing visual odometry implementations and deepened my knowledge of the subject greatly. While my implementation suffered from scale inconsistencies between sets of triangulated features, the gains in rotational accuracy substantially improve the odometry capabilities of the iMHS platform. This project was also a great opportunity to further my OpenCV and project management skills, as it was one of the largest independent undertakings of my coursework.