The AllerGenius app lets users with dietary restrictions skip the necessary but tedious and time-consuming task of manually reading through ingredient lists by digitally augmenting product packaging with approved or rejected labels. Using a phone's camera, the app recognizes a product in real time, compares its ingredient list against the user's restrictions, and overlays a checkmark or an "X" on the product. In our testing, we calculated that users could evaluate products roughly 200 times faster with this system than with their previous method of manual reading.
For this project, I decided to explore the practice of autobiographical design: I wanted to build a system that I, as the designer, had a personal connection to and would use myself. As a vegan, I found that I spent a lot of unnecessary time reading through ingredient lists on packaged foods to see if they fit a plant-based diet. This process of picking up, reading, and putting back products was annoying, time-consuming, and occasionally error-prone. Like kosher, gluten-free, and cruelty-free labels, "Certified Vegan" labels appear on some health-conscious products, but not on all vegan products. As a theoretical solution, I would love to see this certified vegan label displayed on every product that meets the criteria. Expanding on that idea, it would be just as beneficial if each user could get a certification label tailored to their own needs rather than a generic vegan one. For instance, a package would carry a "Terry Certified" label based on Terry's specific dietary needs!
A theoretical solution that overlays personalized dietary labels on products may seem impossible; in practice, however, simple AR techniques and image recognition make for an easy and effective answer to a common problem. Using a phone's camera, the system recognizes a product on a shelf, reads through its ingredients, compares them against the user's dietary restrictions, and digitally overlays either a checkmark or an "X" directly on the product in a fraction of a second. Because recognition runs continuously in real time, users never need to take individual photos of products; they can simply point the camera and see the corresponding labels as fast as they can move their hand.
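The core comparison step described above can be sketched in a few lines of Python. This is an illustrative mock, not the app's actual code; the restriction set and ingredient names below are invented for the example.

```python
# Hypothetical sketch of the matching step: check a scanned product's
# ingredient list against one user's restriction set.

RESTRICTED = {"gelatin", "whey", "casein", "honey"}  # example restrictions

def classify(ingredients):
    """Return ('X', violations) if any ingredient is restricted,
    otherwise ('check', [])."""
    violations = RESTRICTED.intersection(i.lower() for i in ingredients)
    return ("X", sorted(violations)) if violations else ("check", [])

label, hits = classify(["Sugar", "Gelatin", "Citric Acid"])
print(label, hits)  # → X ['gelatin']
```

A set intersection keeps the lookup fast enough to run on every camera frame, which matters for the real-time behavior described above.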
Concept Model - Early Prototype
This model was originally based on the idea of getting dietary information about snacks inside vending machines. Because it is nearly impossible to find the ingredients of a vending machine snack without multiple online searches, the idea was born out of necessity rather than convenience. The original design uses a two-step flow: the user first inputs their restrictions and is then brought to the AR screen. The criteria were meant to cover every potential restriction a user might have, from fat content to artificial dyes to religious standards.
Our presentation of this model was successful but raised several concerns that were mainly technical in nature. People wanted to know how the system would deal with glare on product packaging, the speed at which the system could recognize products, how we would get the nutritional information for the products, and how well the image recognition software could adapt to changing product design. For the most part, these concerns were out of our hands as designers, but we quickly learned that Vuforia, the image recognition software we used, accounts for these problems.
My partner and I moved into the programming phase shortly after developing our concept model. First, my partner wrote a Python script that gathers ingredient information for a search term from a Walmart database. Then, drawing on YouTube tutorials and API documentation, we used Unity and Vuforia to build a working prototype that refined the concept model. The new prototype has a single screen where users select their criteria from a scrolling bar of options along the bottom. The color scheme was also changed to sky blue, a color users said reminded them of fresh air and cleanliness. In total, the project consists of 31 trained target images of product boxes, each paired with its ingredient information. Two C# scripts gather input from the user, compare it against the ingredient list, and switch the overlay image to a checkmark or an "X". View the video below to see a sample of product recognition and classification.
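The role those two scripts play can be sketched in Python (the project's own scripting language) rather than C#. The product database and target names below are invented stand-ins for the 31 trained Vuforia targets and the Walmart-derived ingredient data, not the real pipeline.

```python
# Illustrative sketch only: a mocked pairing of recognized target-image
# names to ingredient lists, standing in for the app's C# scripts.

PRODUCT_DB = {
    "choco_puffs_box": ["sugar", "cocoa", "whey", "natural flavor"],
    "fruit_bites_box": ["apple puree", "pectin", "citric acid"],
}

def overlay_for(target_name, restrictions):
    """Pick which overlay image a recognized target should display."""
    ingredients = PRODUCT_DB.get(target_name)
    if ingredients is None:
        return "none"          # untrained target: show no overlay
    if any(i in restrictions for i in ingredients):
        return "x_overlay"     # restricted ingredient found
    return "check_overlay"     # all ingredients pass

print(overlay_for("choco_puffs_box", {"whey"}))  # → x_overlay
print(overlay_for("fruit_bites_box", {"whey"}))  # → check_overlay
```

In the real app, Vuforia supplies the recognized target name each frame, and the overlay swap happens on the corresponding Unity game object.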
Second Prototype - New UI
For the second iteration of the UI, the color scheme was changed to a lighter, more breathable palette. The simple outline icons were replaced with icons I designed in Adobe Illustrator. These icons were an exercise in golden-ratio icon design: their curves are built from circles whose sizes follow the golden ratio, as approximated by ratios of successive Fibonacci numbers.
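As a side note on the ratio itself: ratios of successive Fibonacci numbers converge on the golden ratio φ ≈ 1.618, which is what makes them a convenient basis for sizing the circles. A quick check (illustrative only, not part of the app):

```python
# Successive Fibonacci ratios approach the golden ratio phi ≈ 1.618.

def fib_ratios(n):
    """Return the first n ratios of consecutive Fibonacci numbers."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

print(round(fib_ratios(10)[-1], 3))  # → 1.618
```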