The Issue
Checking out fresh produce in the grocery store is costly for the business and inefficient for the consumer. Slapping a sticker on 100,000 items and paying cashiers to staff 10 check-out lanes requires an enormous amount of tedious human work.
Our product could help automate the process of identifying items during checkout. We avoid bar codes and reliance on human labor, which translates to huge cost savings for our customers. Our product helps with the following:
- Reduce human hours needed for check-out
- Expedite and improve customer experience
- Reduce plastic waste involved with bulk food items
Stop By & Check out a working demo!
At our station you can try a live demo of our product: use our integrated scale and add items to your virtual cart. Come see it for yourself!
Our mission:
Revolutionize the grocery checkout system with modern advances in computer vision
We identified a way to streamline the checkout experience that speeds up lines while reducing man-hours for grocery stores and chains. Our idea is simple but transformative. Using state-of-the-art computer vision object detectors, we automatically scan and classify groceries without the need for manual look-up. Our system is modular and can be easily integrated into existing grocery stores of any size, expediting the checkout experience for the customer while saving costs for businesses.
Modular Integration
Modular, easy integration into existing check-out systems
Lightweight Classifier
Lightweight classifier and detector using MobileNetV2, with robust accuracy
Increased Security
Increased security in self-checkout lanes with an additional overhead camera.
Technical Details
How did we build the deep learning back-end?
We compiled a selection of images from publicly available datasets across 6 common food categories with 1000 images each. Then, we used transfer learning on the MobileNetV2 architecture pre-trained on ImageNet and added our own output classification layer.
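For reference, here is a minimal sketch of how such a transfer-learning setup could look in TensorFlow/Keras; the input size, dropout, optimizer, and training calls are illustrative assumptions rather than our exact training script:

```python
# Sketch of the transfer-learning classifier: a MobileNetV2 backbone pre-trained
# on ImageNet, frozen, with our own softmax output layer on top.
# Input size and hyperparameters are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 6          # six common food categories
IMG_SIZE = (224, 224)    # assumed input resolution

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # freeze the pre-trained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # map pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # our output layer
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would come from e.g. tf.keras.utils.image_dataset_from_directory
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```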
For object detection, we fine-tuned a pre-trained model from TensorFlow Hub on select images from the COCO2017 dataset. Again, we chose a MobileNetV2 backbone for lightweight inference and used it with the single-shot detector (SSD) model. We deployed our models in Flask to create a REST API and built our demo application around it.
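A rough sketch of the detection side is shown below; the TensorFlow Hub handle and output keys follow the standard TF2 detection interface, the exact module version and confidence threshold are assumptions, and the fine-tuning step is omitted:

```python
# Sketch of the detection side: an SSD MobileNetV2 model loaded from
# TensorFlow Hub and run on a single frame (fine-tuning omitted;
# hub handle and threshold are assumptions).
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

frame = cv2.imread("checkout_frame.jpg")          # frame from the overhead camera
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # the detector expects RGB
inputs = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)

outputs = detector(inputs)                        # dict of batched tensors
boxes = outputs["detection_boxes"][0].numpy()     # [ymin, xmin, ymax, xmax], normalized
classes = outputs["detection_classes"][0].numpy().astype(int)
scores = outputs["detection_scores"][0].numpy()

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:                               # assumed confidence threshold
        print(f"class {cls} at {box.round(2)} with score {score:.2f}")
```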
What tech stack did we use?
For mechanical, we used Fusion360 for design, Solidworks for simulation, Corel Draw for CAM, and Universal's UCP to generate G-code. Within Fusion360, we created the engineering drawings and developed constraints between the different components of our assembly. From those constraints, we fed the material limitations into Solidworks, alongside Fusion, to optimize material placement on our side members. Using Corel and UCP, we generated the paths for the laser to cut out the acrylic panels and etch the plastic decals.
We used Python for our backend, Flask for our REST API, and JavaScript with Bootstrap for our frontend. TensorFlow powers our deep learning models, and OpenCV handles our image processing and object detection.
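As an illustration, a stripped-down version of the classification endpoint could look like the sketch below; the route name, model path, and label list are placeholders, not our production code:

```python
# Stripped-down sketch of the Flask REST API around the classifier.
# The route name, model path, and label list are placeholders.
import cv2
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("grocery_classifier.keras")  # assumed model file
LABELS = ["apple", "banana", "orange", "tomato", "potato", "onion"]  # placeholder labels

@app.route("/classify", methods=["POST"])
def classify():
    # Decode the uploaded image with OpenCV and resize it to the model input size.
    raw = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    img = cv2.cvtColor(cv2.imdecode(raw, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224)).astype(np.float32)[np.newaxis, ...]

    probs = model.predict(img)[0]
    top = int(np.argmax(probs))
    return jsonify({"label": LABELS[top], "confidence": float(probs[top])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```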
A brief explanation of our design process
To calculate the view range of our camera, we used the Galaxy S21's specified 7 mm focal length together with the 16-inch diameter needed for our 144 degree FOV to completely encapsulate a watermelon at an 18-inch height. To keep the measurement area circular, the camera required a horizontal offset, which led to the shelf-style mount. Using Solidworks simulation to verify the stability of the system, we propagated the connections downward and reduced the material volume of the side components.
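A quick back-of-the-envelope check of that geometry (treating the camera as sitting roughly 18 inches above a 16-inch-diameter measurement area centered under it, which is our reading of the setup) shows the 144 degree FOV leaves plenty of margin:

```python
# Back-of-the-envelope check of the camera geometry, assuming the camera sits
# about 18 inches above a 16-inch-diameter measurement area centered under it.
import math

camera_height_in = 18.0   # assumed camera-to-scale distance
area_diameter_in = 16.0   # measurement area that must stay in frame
camera_fov_deg = 144.0    # FOV quoted for the setup

# Minimum full FOV needed to see a circle of this diameter from that height.
min_fov_deg = 2 * math.degrees(math.atan((area_diameter_in / 2) / camera_height_in))

print(f"minimum FOV needed: {min_fov_deg:.1f} deg")   # roughly 48 deg
print(f"available FOV:      {camera_fov_deg:.1f} deg")
```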
How could this help reduce plastic waste in grocery stores?
Modern packaging and transport have been optimized for ease of checkout at the expense of greater waste. Most grocery stores opt to package fruit and other items in groups to avoid putting a sticker on every single item. With a self-checkout system that needs neither stickers nor an unwieldy self-checkout UI, we eliminate the need to package items in groups rather than selling them individually.
How do we plan to deploy this product in the medium to long term?
Initial deployment: smaller outdoor markets selling per piece or per pound. The lighter traffic and higher share of single-item sales make these venues ideal.
Later deployment: grocery and convenience chains such as Safeway, Costco, and Stop & Shop, where the larger volume of product labeled by staff and checked out by customers would make our product most useful.
What changes could we make to our product in the future to expand its use cases?
The low-hanging fruit would be to widen the range of scannable items to non-grocery products and to make the system better at identifying bundles of items. We could also extend the setup to flag stolen goods and false checkouts.
In the future, we could also integrate this technology into the shopping cart itself, letting customers pay for their cart without ever going through a checkout lane.