Raspberry Pi Object Detection

The Car Detection Project looks for motion in your driveway. If motion is detected, a picture is taken, and the NanoNets API will look for cars in the image using my model. If a car is detected, the user is notified via text, and the image is uploaded to Google Drive.

Engineer

Anshul J.

Area of Interest

Computer Science

School

Silver Creek High School

Grade

Incoming Junior

Final Milestone

For my final milestone, I set up the Twilio API, which sends texts to the user, and the Google Drive API, which I used to upload the image to Google Drive. Anytime the motion detector detects motion in the driveway, a text message is sent to the user, telling them that a car has arrived in the driveway. The text also includes a link to Google Drive, where the image has been uploaded, so that the user can easily access the picture.

Twilio API

The Twilio API is what makes these text messages possible. To get it working, I first needed to download all of the necessary packages. Then I had to input the security credentials: the Account SID and the Auth Token. After that, I obtained a trial number, which I use to send the messages. Once I could successfully send a text message, I also made sure to include the link to Google Drive so that the user can easily switch between apps.
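The notification step can be sketched with the Twilio Python helper library roughly like this. The Account SID, Auth Token, phone numbers, and Drive link below are all placeholders, not my real credentials:

```python
def build_alert(drive_link: str) -> str:
    """Compose the notification text, including the Google Drive link."""
    return f"A car has arrived in the driveway. Photo: {drive_link}"

def send_alert(drive_link: str) -> None:
    # pip install twilio -- imported here so the helper above stays standalone.
    from twilio.rest import Client

    # Placeholder credentials: substitute your own Account SID and Auth Token.
    client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")
    client.messages.create(
        body=build_alert(drive_link),
        from_="+15005550006",  # Twilio trial number (placeholder)
        to="+15551234567",     # your phone number (placeholder)
    )

if __name__ == "__main__":
    send_alert("https://drive.google.com/your-uploaded-image")
```

Keeping the message text in its own `build_alert` function makes it easy to change the wording without touching the Twilio call.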

[Screenshots: the SMS notification and the Google Drive upload]

Google Drive API

After I got Twilio working, I looked to integrate Google Drive with my model, and decided to use the Google Drive API. I first installed all of the necessary packages onto my Raspberry Pi. I then created a project in the Google Cloud Console and obtained the OAuth2 credentials for it. The OAuth2 credentials contain the client ID and token, among other things; all of this is for security reasons. After that, I put the credentials in the folder with the code that uploads an image. Now, after I sign in, images can be uploaded to my Drive!
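The upload step looks roughly like this with the PyDrive wrapper around the Drive API. This is a sketch under assumptions: it expects the OAuth2 credentials file (`client_secrets.json`) to sit next to the script, and the file naming is my own illustrative choice:

```python
import time

def make_title(prefix: str = "driveway", ts=None) -> str:
    """Build a timestamped title for the uploaded image, e.g. driveway_20240101_120000.jpg."""
    if ts is None:
        ts = time.time()
    return f"{prefix}_{time.strftime('%Y%m%d_%H%M%S', time.localtime(ts))}.jpg"

def upload_image(path: str):
    # pip install PyDrive -- imported here so make_title stays standalone.
    from pydrive.auth import GoogleAuth
    from pydrive.drive import GoogleDrive

    gauth = GoogleAuth()             # reads client_secrets.json from this folder
    gauth.LocalWebserverAuth()       # opens a browser once so you can sign in
    drive = GoogleDrive(gauth)

    f = drive.CreateFile({"title": make_title()})
    f.SetContentFile(path)           # attach the captured image
    f.Upload()
    return f                         # uploaded file's metadata (title, links, ...)

if __name__ == "__main__":
    upload_image("image.jpg")
```

The timestamped title keeps repeated captures from overwriting each other in the Drive folder.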

Second Milestone

My second milestone was to start implementing my idea for object detection. I used the NanoNets API service to build my model: I uploaded about 100 images and annotated them myself, then trained the model and tested it on my own cars. Next, I set up the motion sensor, building a circuit so that an LED blinks when the sensor detects motion. Finally, I tested the motion detector with my model in my driveway, to see if it could recognize and annotate my car.
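Calling the trained model can be sketched like this. The model ID and API key are placeholders, and the response fields (`result`, `prediction`, `label`, `score`) are based on the NanoNets object-detection API, so treat the exact names as assumptions:

```python
def contains_car(response_json: dict, threshold: float = 0.5) -> bool:
    """Return True if any prediction is labelled 'car' above the score threshold."""
    for result in response_json.get("result", []):
        for pred in result.get("prediction", []):
            if pred.get("label") == "car" and pred.get("score", 0) >= threshold:
                return True
    return False

def detect(image_path: str) -> bool:
    import requests  # pip install requests

    # Placeholder model ID and API key -- substitute your own.
    url = "https://app.nanonets.com/api/v2/ObjectDetection/Model/YOUR_MODEL_ID/LabelFile/"
    with open(image_path, "rb") as f:
        resp = requests.post(url, auth=("YOUR_API_KEY", ""), files={"file": f})
    return contains_car(resp.json())

if __name__ == "__main__":
    print(detect("image.jpg"))
```

Separating the parsing (`contains_car`) from the network call makes it easy to test the decision logic without hitting the API.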

Motion Detector

To set up the motion detector, I used my breadboard to make a circuit where the LED starts blinking if the motion detector detects motion. If you hold the motion detector with the pins facing you (for my sensor), the ground pin is on the left, the signal pin is in the middle, and the power pin is on the right. The ground pin completes the circuit by giving current a return path. The signal pin transmits information; in this case, whether motion is being detected. I connected the sensor's power pin to the power pin on the Raspberry Pi, to help turn on the LED when motion is detected, and used a resistor to limit the current through the LED, since the LED can only handle a certain amount of current. After setting up the circuit, I also had to tune the sensor's sensitivity to what works best for me and my model.
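The sensor-to-LED loop above can be sketched in Python with the RPi.GPIO library. The BCM pin numbers 17 (PIR signal) and 18 (LED) are assumptions; adjust them to your own wiring:

```python
def should_blink(pir_reading: int) -> bool:
    """Blink the LED only when the PIR sensor reports motion (signal HIGH)."""
    return pir_reading == 1

def main():
    import time
    import RPi.GPIO as GPIO  # available only on the Raspberry Pi itself

    PIR_PIN = 17  # signal pin from the motion sensor (assumed wiring)
    LED_PIN = 18  # LED through a current-limiting resistor (assumed wiring)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIR_PIN, GPIO.IN)
    GPIO.setup(LED_PIN, GPIO.OUT)

    try:
        while True:
            if should_blink(GPIO.input(PIR_PIN)):
                # Blink while the sensor keeps reporting motion.
                GPIO.output(LED_PIN, GPIO.HIGH)
                time.sleep(0.25)
                GPIO.output(LED_PIN, GPIO.LOW)
                time.sleep(0.25)
            else:
                time.sleep(0.1)
    finally:
        GPIO.cleanup()  # release the pins on exit

if __name__ == "__main__":
    main()
```

The `GPIO.cleanup()` in the `finally` block matters: without it the pins stay configured after the script stops.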

Testing Issues 

One of the biggest issues I had while testing my model was that it was hard to angle my camera and motion detector because of how thin the setup is. To solve this problem, I used a cardboard box as the exterior of the model, with small cutouts for the Camera Module and motion detector on one side, and for the power and HDMI cables on the other side. I also used tape to secure the Camera Module. Using the box, I was able to get a much better angle on any car that enters my driveway.

First Milestone

For my first milestone, I set up the Raspberry Pi. I used the Raspberry Pi Imager to write the SD card, which I then installed into the Pi; the Imager installs all of the software needed to run the Raspberry Pi. After that, I installed the heat sinks to prevent overheating, which can happen easily since the Raspberry Pi is such a small computer. I then connected the Raspberry Pi to a power source and used the HDMI cable to connect it to my monitor. Next, I installed the camera, an important component of my project. Once the camera was installed, I ran several setup programs, learning how to see the camera preview, how to take a picture with the camera, and how to record a video with it too.
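Those camera exercises look roughly like this with the picamera library (it only runs on the Pi itself; the file names here are my own illustrative choices):

```python
def capture_filename(n: int) -> str:
    """Name still images image_000.jpg, image_001.jpg, ... so captures don't collide."""
    return f"image_{n:03d}.jpg"

def demo():
    import time
    from picamera import PiCamera  # pre-installed on Raspberry Pi OS

    camera = PiCamera()
    try:
        camera.start_preview()               # live preview on the monitor
        time.sleep(2)                        # let the sensor adjust to the light
        camera.capture(capture_filename(0))  # take a still picture
        camera.start_recording("video.h264") # record a short video clip
        time.sleep(5)
        camera.stop_recording()
    finally:
        camera.stop_preview()
        camera.close()

if __name__ == "__main__":
    demo()
```

The two-second sleep before capturing gives the camera's auto-exposure time to settle, which noticeably improves the first picture.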

Pretrained Model

I then made a NanoNets account so that I could obtain a pretrained model, which I ran on the Pi. I went with the furniture detection model, which can detect beds, tables, cabinets, etc. The model first takes a picture with the Pi Camera, then the NanoNets API finds the furniture objects in the picture. Boxes are drawn over the furniture, and the annotated image is returned to you. The model also returns the different types of furniture in the image and how confident it is that each object is furniture (a score from 0 to 1).
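Filtering those 0-to-1 scores is a small, pure piece of logic. In this sketch the prediction dicts use assumed field names (`label`, `score`); the real API response may name them differently:

```python
def confident_objects(predictions, threshold=0.5):
    """Keep only the labels of detections whose confidence clears the threshold."""
    return [p["label"] for p in predictions if p.get("score", 0) >= threshold]

# Illustrative predictions, in the spirit of the furniture model's output.
preds = [
    {"label": "bed", "score": 0.91},
    {"label": "table", "score": 0.34},
    {"label": "cabinet", "score": 0.78},
]
print(confident_objects(preds))  # -> ['bed', 'cabinet']
```

Raising the threshold trades missed objects for fewer false positives, which is the same tuning decision the car model needs later.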
