DOLL Labs ยท Internship
Autonomous Robot Navigation
TurtleBot 3 on ROS, navigating a maze with LiDAR while detecting colored ball targets through the onboard camera.
Getting the robot moving.
The first step was getting the hardware and software stack stable. The TurtleBot 3 runs ROS on Ubuntu Linux, so that meant flashing the OpenCR board, configuring the Raspberry Pi, and verifying that the LiDAR, camera, and motor nodes were all publishing and subscribing correctly on the ROS graph. All custom nodes were written in C++.
The navigation node consumed LiDAR data in real time. The sensor reports a 360-degree array of distances, which the node split into angular zones to detect walls ahead and to either side. From there it issued velocity commands to keep the robot moving through the maze without collisions. As it moved, it also built a 2D map of the environment to track obstacles and visited areas.
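The zone-splitting idea can be sketched as follows. This is a minimal, dependency-free reconstruction of the logic, not the project's actual node: the zone boundaries, the 0.4 m threshold, and the velocity values are illustrative, and the `Twist` struct stands in for the ROS `geometry_msgs/Twist` message.

```cpp
#include <array>
#include <algorithm>

// Minimum distance over a wedge of the 360-entry scan. Indices wrap at 360,
// and zero readings (which LiDAR drivers commonly emit for invalid returns)
// are ignored.
float zoneMin(const std::array<float, 360>& ranges, int start, int end) {
    float best = 1e9f;
    for (int i = start; i != end; i = (i + 1) % 360) {
        if (ranges[i] > 0.0f) best = std::min(best, ranges[i]);
    }
    return best;
}

struct Twist { float linear; float angular; }; // stand-in for geometry_msgs/Twist

// Reactive policy: if something is close ahead, slow down and turn toward
// the side with more clearance; otherwise drive straight.
Twist decide(const std::array<float, 360>& ranges) {
    const float front = zoneMin(ranges, 330, 30);  // wraps through 0 degrees
    const float left  = zoneMin(ranges, 30, 90);
    const float right = zoneMin(ranges, 270, 330);

    if (front < 0.4f) {
        return { 0.05f, left > right ? 0.8f : -0.8f };
    }
    return { 0.2f, 0.0f };
}
```

In the real node the same decision would be made in the LiDAR callback and the resulting command published on `cmd_vel`.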
Message passing with RabbitMQ
RabbitMQ handled communication between navigation and detection. When the camera node detected a ball, it published an event to a queue. The navigation node consumed that event and switched behavior accordingly, either moving toward the objective or continuing maze traversal.
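The behavior switch on the navigation side reduces to a small state machine over incoming events. A sketch, with `std::queue` standing in for the RabbitMQ queue (the real transport would go through an AMQP client) and illustrative event names rather than the project's actual message schema:

```cpp
#include <queue>
#include <string>

enum class Mode { Explore, Approach };

// Drain pending detection events and update the navigation mode.
// Event names here ("ball_detected", "ball_reached") are hypothetical.
Mode step(Mode current, std::queue<std::string>& events) {
    while (!events.empty()) {
        const std::string msg = events.front();
        events.pop();
        if (msg == "ball_detected") current = Mode::Approach;   // steer toward target
        else if (msg == "ball_reached") current = Mode::Explore; // resume traversal
    }
    return current;
}
```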
TurtleBot in the maze. The right half shows the LiDAR map being built in real time.
Detecting the objectives.
Each colored ball in the maze was treated as an objective. The detection node subscribed to the camera stream and processed frames with OpenCV. The pipeline converted BGR to HSV, applied a color mask per target, and extracted contours from the masked result.
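The color-mask stage of that pipeline can be shown per pixel without the OpenCV dependency. This is a sketch of the BGR-to-HSV conversion (following OpenCV's 8-bit convention, hue in [0, 180)) plus a hue-band test; the saturation and value floors in `inHueBand` are illustrative thresholds, not the project's tuned values.

```cpp
#include <algorithm>
#include <cstdint>

struct HSV { uint8_t h, s, v; }; // OpenCV 8-bit convention: H in [0, 180)

// Per-pixel BGR -> HSV, matching the standard hexcone formula that
// cv::cvtColor(BGR2HSV) implements for 8-bit images.
HSV bgrToHsv(uint8_t b, uint8_t g, uint8_t r) {
    const float bf = b / 255.0f, gf = g / 255.0f, rf = r / 255.0f;
    const float vmax  = std::max({bf, gf, rf});
    const float vmin  = std::min({bf, gf, rf});
    const float delta = vmax - vmin;

    float h = 0.0f;
    if (delta > 0.0f) {
        if (vmax == rf)      h = 60.0f * (gf - bf) / delta;
        else if (vmax == gf) h = 120.0f + 60.0f * (bf - rf) / delta;
        else                 h = 240.0f + 60.0f * (rf - gf) / delta;
        if (h < 0.0f) h += 360.0f;
    }
    const float s = (vmax > 0.0f) ? delta / vmax : 0.0f;
    return { static_cast<uint8_t>(h / 2.0f + 0.5f),       // 0..359 -> 0..179
             static_cast<uint8_t>(s * 255.0f + 0.5f),
             static_cast<uint8_t>(vmax * 255.0f + 0.5f) };
}

// Mask step: keep a pixel whose hue lies in the target band and which is
// saturated and bright enough to be a colored ball rather than a wall.
bool inHueBand(const HSV& p, uint8_t lo, uint8_t hi) {
    return p.h >= lo && p.h <= hi && p.s > 80 && p.v > 50;
}
```

With OpenCV, the same band test is what `cv::inRange` applies across the whole HSV frame before contour extraction.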
HSV worked better than RGB because hue is far less sensitive to lighting shifts. In this maze, brightness varied considerably as the robot moved, so fixed RGB thresholds drifted while HSV thresholds stayed much more stable. Once the node found a contour large enough to represent a ball, it used a bounding circle to estimate direction and rough distance.
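The direction-and-distance estimate from a bounding circle follows from a pinhole camera model: horizontal offset from the image center gives bearing, and apparent radius gives range. A sketch, where the image width, focal length, and real ball radius are assumed example values, not calibrated numbers from the project:

```cpp
#include <cmath>

struct Estimate { double bearing_rad; double distance_m; };

// cx: bounding-circle center x in pixels; radius_px: its radius in pixels.
// Bearing comes from the pixel offset relative to the optical axis;
// distance from the ratio of real to apparent radius.
Estimate estimateTarget(double cx, double radius_px,
                        double image_width  = 640.0,  // assumed resolution
                        double focal_px     = 500.0,  // assumed focal length
                        double ball_radius_m = 0.05)  // assumed 5 cm ball
{
    const double bearing  = std::atan2(cx - image_width / 2.0, focal_px);
    const double distance = focal_px * ball_radius_m / radius_px;
    return { bearing, distance };
}
```

A ball centered in the frame yields zero bearing, so the navigation node only has to drive the bearing toward zero while the distance estimate shrinks.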