Raspberry Pi Object Following Robot
My project is a robot controlled by a Raspberry Pi. It takes a picture of any object and then follows and tracks that object as it moves, keeping a constant distance between itself and the target.
Engineer
Taren P
Area of Interest
Electrical Engineering
Computer Science
Aeronautical/Aerospace Engineering
Software Engineering
School
Los Gatos High School
Grade
Incoming Freshman
Third Milestone
For my third milestone, I added movement to track the target object and resolved the random pauses that occurred while running the script. I began by adding movement to the robot and testing the forward, backward, right, and left motions. Once basic movement worked, I coded the tracking movements: when the target object is to the right of the middle pixels in the frame, the robot turns right, and similarly for the left. For the forward and backward motions, I used an ultrasonic sensor to measure the distance between the target object and the robot. If the gap became too large, the robot would move forward, and if it became too small, the robot would move backward. While testing these movements, the motors kept overcompensating: the robot would turn too fast, swing past the target object, and never recognize that it had passed it. I tried slowing the motor speed, but then the motors didn't have enough power to move the car. I solved the problem by having the motors turn in short bursts. Each burst turns the robot slightly, and between bursts the Raspberry Pi can process the next frame, so it knows where the target object is at all times.

I also resolved the random pauses in my script by using a digital interrupt. I used a series of print statements to figure out where the pause was occurring and discovered that my script was getting stuck in a while loop: a certain section of code was repeating infinitely, preventing the rest of the script from running. The setup of the ultrasonic sensor code caused the infinite loop. I removed the loop and used a digital interrupt instead, so rather than continuously checking for a pulse and potentially stalling the script, the script simply blocks until the ultrasonic sensor receives its pulse.
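As a sketch, the tracking rules above might look like this in Python. The pixel deadband and distance thresholds here are illustrative tuning values, not the exact ones I used:

```python
# Decision logic for tracking: steer based on where the target sits
# relative to the middle pixels, and drive based on the ultrasonic gap.
# The deadband and distance thresholds are illustrative, not my exact values.

def steering_command(object_x, frame_width, deadband=40):
    """Turn toward the target when it drifts away from the middle
    pixels of the frame; do nothing inside the deadband."""
    center = frame_width // 2
    if object_x < center - deadband:
        return "left"
    if object_x > center + deadband:
        return "right"
    return "center"

def drive_command(distance_cm, target_cm=30.0, tolerance_cm=5.0):
    """Keep a constant gap: forward when the gap grows too large,
    backward when it shrinks too small."""
    if distance_cm > target_cm + tolerance_cm:
        return "forward"
    if distance_cm < target_cm - tolerance_cm:
        return "backward"
    return "hold"
```

On the robot, each non-neutral command drives the motors for only a short burst and then stops them, so the Raspberry Pi can process a fresh frame before making the next correction; the burst length is itself a tuning value, and that stop-and-look rhythm is what prevents overshooting the target.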
This is how I worked around the problem of a while loop nested inside another while loop.
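A minimal sketch of that digital-interrupt workaround, assuming an HC-SR04-style sensor and RPi.GPIO's `wait_for_edge`. The GPIO module is passed in as a parameter so the function can be exercised off the Pi; on the robot you would pass `RPi.GPIO` itself after setting the trigger pin as an output and the echo pin as an input. Pin numbers and the timeout are placeholders:

```python
import time

def read_distance_cm(gpio, trig, echo, timeout_ms=100):
    """Measure distance with an HC-SR04-style ultrasonic sensor.
    Instead of spinning in `while gpio.input(echo) == 0: pass` loops
    (which is what froze my script), this blocks on a digital
    interrupt via wait_for_edge until the echo pin changes."""
    # A 10-microsecond trigger pulse tells the sensor to fire.
    gpio.output(trig, True)
    time.sleep(0.00001)
    gpio.output(trig, False)
    # Sleep until the echo line rises, or give up after timeout_ms.
    if gpio.wait_for_edge(echo, gpio.RISING, timeout=timeout_ms) is None:
        return None
    start = time.time()
    if gpio.wait_for_edge(echo, gpio.FALLING, timeout=timeout_ms) is None:
        return None
    # Echo high-time * speed of sound (34300 cm/s) / 2 for the round trip.
    return (time.time() - start) * 17150
```

`wait_for_edge` returns `None` on timeout, so a missed echo no longer hangs the script; the caller can just skip that reading and move on to the next frame.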
Second Milestone
My second milestone was building my car chassis, setting up a reset button and indicator lights, CAD modeling my ultrasonic sensor/Pi Camera mount, and improving camera speed. I first planned out a schematic of the chassis layout, then built and wired the chassis so I could begin attaching components. For the ultrasonic sensor and Pi Camera to work, they both have to sense the target object, so I designed my own mount in Fusion 360 to keep the two modules level with one another. My first attempt simply placed the camera on top of the ultrasonic sensor; although this worked, I was skeptical about how it would affect accuracy. I ultimately designed the mount so that the ultrasonic sensor sits on the bottom and the Pi Camera sits on top, tilted downward at a six-degree angle.

I then discovered about four seconds of camera latency, which was heavily affecting my robot's accuracy. For the robot to track something, it has to keep the object in the video frame, but with four seconds of latency the target object would have already escaped the frame before the robot noticed. I searched through my code for unnecessary lines that might be adding to the delay and found a time.sleep(0.00001) placed before the ultrasonic sensor sends out its pulse. This line gives the sensor time to prepare before the pulse is sent. On its own the 0.00001-second delay is tiny, but since the loop runs many times per second, the delays accumulate into a larger one. I also ran into another problem while testing: the script would randomly pause and stay paused until I stopped it. I address the solution in my third milestone.
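One way to hunt for hidden delays like this is to time each stage of the main loop and print the results. This is a generic diagnostic sketch using Python's `time.perf_counter`; the stage names are examples, not my actual function names:

```python
import time

def profile_stages(stages):
    """Time each stage of the main loop once. `stages` maps a label
    to a zero-argument callable, e.g. frame capture, color masking,
    or the ultrasonic read."""
    timings = {}
    for name, fn in stages.items():
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings
```

Printing the returned dictionary on every iteration makes the slow stage obvious: a per-loop delay that looks harmless in the source shows up as a consistent cost here.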
First Milestone
My first milestone was setting up and installing the packages required for the project, connecting the camera module to the Raspberry Pi, familiarizing myself with the functions used in OpenCV, and developing masking software for tracking any object. To set up my Raspberry Pi, I flashed a Debian-based OS onto my microSD card and then installed OpenCV. To learn how to use OpenCV for my project, I started playing around with capturing video and wrote a script that captures video and draws a white mask over objects with a red color.

Once I had the basics of tracking one fixed color working, I decided to modify the script to track any object. To do this, I needed to take a photo of the target object and extract the colors of its pixels to create a range; this low-to-high color range is what is used to track the object. The software then creates a mask over any objects whose colors fall within the range. Simple, right… Not really. I ran into problems because RGB images are very colorful: there is a broad spectrum of possible colors, and all of them can affect the way my script runs. I fixed this by converting the image from RGB to HSV, which stands for hue, saturation, and value. HSV separates the color itself (the hue) from brightness, so there are far fewer distinct color values to deal with. After converting my RGB image of the target object to HSV, I was able to extract the HSV values of each pixel. The remaining problem is that the picture contains many different hues, since there might be a few background objects. To remove the background colors, I decided to take just the one hundred center pixels; these pixels will always contain the target object, as long as the object is mostly in the center of the camera's view. Using an algorithm that I created, the script sorts through those one hundred pixels to find the lightest and darkest HSV values.
Those values are then stored in two arrays, one for the light values and one for the dark values. Those two arrays form the range that is needed to track the object, so I can track any object based on its color.