Recycling Sorter

Glass of different colors melts at different temperatures, posing an expensive problem for recycling companies that want to reuse the material. Instead of attempting to sort, melt down, and reuse the glass, many companies leave it all mixed together and crush it to create “glassphalt,” which is used in paving paths. However, this means that glass products cannot be turned into other glass products; they end their lifetimes after being “recycled.”
There are other difficulties in sorting trash and recycling: for example, if a single piece of trash accidentally ends up in a bag of recycling, the whole bag must be thrown out, which is both wasteful and defeats the purpose of recycling.
I wanted to use object detection with deep learning to automatically sort recycled glass. The primary part of my project was using a Raspberry Pi 3 to learn object detection, and the secondary part was a small physical component, a claw arm, which models what the program might do on a larger scale. Although this program is only capable of sorting glass, the technology is also applicable to other forms of recycling sorting.

Engineer

Anika H

Area of Interest

Technology for Urban Sustainability

School

Brearley School

Grade

Incoming Junior

Demonstration

Over the course of this project, I learned a lot about technology for sustainability. I’ve been interested in sustainability for a while, urban sustainability in particular, because I’ve grown up in a large city. It was a great experience to bring multiple interests together in this project: machine learning, sustainability, and robotics, to name a few. I found glass recycling in particular to be an issue I don’t often hear brought up. In fact, recycling in general is treated as something that happens in the background rather than as an important, and improvable, part of how we take care of the environment around us.

First Milestone

My first milestone for my glass sorting system was to set up the Raspberry Pi and camera and run a pre-trained model. Although I faced challenges setting up the RPi, I was ultimately able to write a Python program that sent an image to NanoNets, where it was processed by a pre-trained object detection model and returned with an identification to be printed to the console. At first the input was an image I had pulled from the internet, which I had to download and reference specifically in the code. However, after repurposing a program I had written to take photos with the RPi camera, the RPi could take a photo of a physical object, such as a knife, and return its identification based on the pre-trained model.
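
A minimal sketch of that capture-and-identify flow is below, assuming the picamera library for the camera and NanoNets' object detection LabelFile endpoint for the upload; the API key and model ID are placeholders, and the exact endpoint and response format depend on your NanoNets account and model.

```python
# Sketch: capture a photo on the Raspberry Pi and send it to a NanoNets
# object detection model. API_KEY and MODEL_ID are placeholders.
import requests
from time import sleep
from picamera import PiCamera

API_KEY = "YOUR_NANONETS_API_KEY"   # placeholder
MODEL_ID = "YOUR_MODEL_ID"          # placeholder
URL = f"https://app.nanonets.com/api/v2/ObjectDetection/Model/{MODEL_ID}/LabelFile/"

# Take a photo with the RPi camera
camera = PiCamera()
camera.start_preview()
sleep(2)                            # let the sensor adjust to the light
camera.capture("capture.jpg")
camera.stop_preview()

# Upload the image to NanoNets and print the returned predictions
with open("capture.jpg", "rb") as img:
    response = requests.post(
        URL,
        auth=requests.auth.HTTPBasicAuth(API_KEY, ""),
        files={"file": img},
    )
print(response.json())
```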

Second Milestone

For my second milestone, I created my customized model on NanoNets, which can tell the difference between colors of glass bottles, both whole and broken. The colors I have trained it on include green, brown, clear, and blue, but more colors can be added in the future. I found and hand-annotated almost 200 photos, and I may add more as I work to increase the accuracy. Although the model was at 90% accuracy when I made this video, it had reached 95% by the time of my demonstration.
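
As a rough illustration of how the custom model's output could drive sorting, the sketch below maps the model's top label to a bin. The response parsing assumes a NanoNets-style prediction structure, and the bin names and score threshold are hypothetical choices for illustration, not part of the actual build.

```python
# Hypothetical sketch: turn the model's top prediction into a sorting decision.
# Assumes a NanoNets-style response like {"result": [{"prediction": [...]}]};
# bin names and the 0.6 threshold are illustrative placeholders.
BIN_FOR_LABEL = {
    "green": "bin_1",
    "brown": "bin_2",
    "clear": "bin_3",
    "blue": "bin_4",
}

def choose_bin(result, min_score=0.6):
    """Pick a bin from the highest-scoring prediction, or None if unsure."""
    outputs = result.get("result") or [{}]
    predictions = outputs[0].get("prediction", [])
    if not predictions:
        return None
    best = max(predictions, key=lambda p: p.get("score", 0))
    if best.get("score", 0) < min_score:
        return None                 # not confident enough to sort this piece
    return BIN_FOR_LABEL.get(best.get("label"))
```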

Full Build Plan and Journal

