The kitchen can be a messy place. "Google Abuela" is a solution: a voice-activated kitchen assistant that provides audio and visual cues to chefs. "Google Abuela" guides users through selecting a meal, gathering the ingredients needed for that meal, finding alternative ingredients if needed, and following the steps of the chosen recipe.
"Google Abuela" was created for chefs who often find themselves with messy or preoccupied hands while in the kitchen.
To prototype and test the "Google Abuela" system, I worked in a team of four with three classmates from my UX prototyping class. Within this team, I helped create the test kit, constructed the visual cues, and served as moderator during the usability testing portion of the process.
Our group started out by brainstorming possible prototypes based on the assignment instructions, which required prototyping an application for a voice-operated assistant, a gesture recognition platform, a vision-impaired navigation aid, or a chatbot or text messaging app. We chose a voice-operated assistant after talking through struggles that occur in our daily lives. We discussed how the kitchen can be a messy place and how it can be difficult to know what to cook, or even to remember to eat, during a busy day. We talked about how our grandparents would remind us to eat, telling us that we looked thin and must eat their food. From there, we envisioned a "Google Abuela" system that would remind users to eat, help users cook, and even deliver the ingredients users would need to create the meal they selected.
After we brainstormed the general idea of our voice-operated assistant, we set about figuring out how we would prototype and test the system. We began by whiteboarding our thoughts and decided that users should be able to view the recipe and ingredients, in addition to having the system help users choose recipes and guide them through the cooking steps.
To create easy-to-view visual cues, we decided to build the visualization on an iPad interface. One member of our group offered to create a website version of the system that could be used for prototyping, and I created pages that would appear to supplement the user's kitchen experience. Ultimately, my designs were not fully used due to a miscommunication within the group; however, they are included below to represent the originally intended visual cues.
For this project, we implemented the "Wizard of Oz" behavioral prototyping technique and decided to use Google Hangouts, so that the "Wizard" could operate the iPad screen and play the audio for "Google Abuela". Google Hangouts was also selected so that the "Wizard" could interact more with the user by being able to hear and view them. We tested the Google Hangouts screen-sharing function and created audio files that would serve as the system's voice, guiding the user through the recipe.
We recruited one of our team member's housemate's boyfriend as our participant. He knew nothing about our prototype or the "Wizard of Oz" technique we implemented, so he was a perfect participant for accurately testing the usability of our system. For the usability test, we had to be flexible and change some of our methods for cueing and gathering information. We ended up turning off the camera on the iPad so that our participant would be unable to see himself. However, this meant the "Wizard" could respond only to verbal cues, rather than also being able to watch the participant and trigger the system's audio files based on what he was doing. Thus, our note taker messaged the "Wizard" throughout the test, explaining what the participant was doing and suggesting what the "Wizard" could say. Additionally, we found that the "Wizard" had to respond to impromptu voice commands such as "How many minutes are left for cooking the meat?". The "Wizard" did not have a timer on hand and had to improvise responses about how long something had been cooking. All in all, our usability study went rather well. We were able to perform the test in an actual kitchen, which helped the prototype feel realistic, and the prototype worked well enough to support usability testing.
Below is the video we created of our usability test. The participant willingly signed a consent form allowing him to be audio- and video-recorded, and gave us written permission to use his usability test in our video.
From the usability testing, we discovered several key findings. To begin with, we found that the visual and audio cues needed improvement. The participant wanted to see where he was in the recipe and which ingredients were needed, but our website prototype could not display both at once. Additionally, there were occasional lags in the audio, which undermined the realism of the simulation. Furthermore, we found that our system can only go so far in enhancing a chef's cooking ability. For instance, the system can instruct users on measuring a teaspoon, but it cannot physically help them measure one. In addition, we discovered that participants and users can be rather shy when speaking to an inanimate object. The object is foreign and lacks certain qualities of human interaction, which can lead users to reduce their interactions and voice commands with it.
Based on our key findings from the usability testing, we identified several improvements that could be made to "Google Abuela". A timer function could be included in the system so that the user could view how much time is left, in addition to hearing an audio cue from the system. We also found that we needed to prime the user more and find a better way to hide the interface, so that the prototype would appear more realistic and support the "Wizard of Oz" aspect of behavioral prototyping. For instance, in future usability tests, we would ask participants to open a "Cooking App" and let it start up, so that participants would believe the prototype is an actual app.
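The proposed timer improvement could be sketched in code. This is a hypothetical illustration only, not part of our prototype; the `CookingTimer` class and its method names are my own invention, showing how a single countdown could drive both the visual cue (an on-screen MM:SS display) and the audio cue (a spoken phrase), rather than the "Wizard" improvising cooking times.

```python
import time


class CookingTimer:
    """Hypothetical timer sketch for the proposed improvement: tracks a
    cooking step so the system can answer "how many minutes are left?"
    with a consistent visual and audio cue."""

    def __init__(self, duration_seconds, now=time.monotonic):
        # The clock is injectable so the timer can be tested with a fake clock.
        self._now = now
        self._end = now() + duration_seconds

    def seconds_left(self):
        # Remaining time, clamped at zero once the step is done.
        return max(0, self._end - self._now())

    def display(self):
        # Visual cue: MM:SS countdown that the iPad interface could show.
        secs = int(self.seconds_left())
        return f"{secs // 60:02d}:{secs % 60:02d}"

    def audio_cue(self):
        # Audio cue: the phrase the assistant would speak aloud.
        mins = round(self.seconds_left() / 60)
        if mins == 0:
            return "This step is done."
        return f"About {mins} minute{'s' if mins != 1 else ''} left on this step."
```

For example, starting a five-minute timer for the meat and checking it four minutes in would show `01:00` on screen while the assistant says "About 1 minute left on this step."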
We brought several different pieces together to create a working prototype. The prototype performed most of the functionality we had brainstormed, and we were able to recruit a participant to test it. It was interesting for me to watch our participant interact with the prototype. Initially he seemed unsure of what to say to the system and would ask me what he should say or which direction he should go. However, as the test progressed, he became more relaxed and began speaking to the system without needing to be prompted.
One of the difficulties of this project was the short time frame. It was hard for our group to find time to meet and work on the prototype, let alone test it. If we were to run this project again, I would push harder for us to meet more often and test the prototype among ourselves first. We ran into several surprises right before our usability test and had to change several things on the fly due to a lack of communication within the group. It would have been beneficial to test the system in an actual kitchen, with everyone performing their usability roles, before we approached our participant and asked him to use our system.