
Humor Bot

HowdyAI • Slack • Social Connections

Project Overview

 

Humor Bot is an extension for Slack that injects humor into group conversation, plays icebreaker games with new teams, and facilitates user study tasks. My partner, Dana Daniels, and I were tasked with making a chatbot that could humorously interact with participants in user studies. Our client was a Cornell researcher who wanted to see whether adding an autonomous chatbot to his research studies would improve the experience of the subjects. We used Howdy AI and Slack as our development and deployment platforms, allowing for quick alterations and easy access for the client. Ultimately, we created three chatbots that performed various tasks for the client to experiment with further.

 

By The Numbers 

81%

of students think humor is appropriate when speaking with a stranger.

96%

of students think humor helps them learn in a class setting. 

92%

of students think humor is appropriate when speaking with a coworker.

76%

of students think humor helps make awkward situations easier.

Data Collection 


Because of the experimental nature of the project, my teammate and I first had to find out what college-aged students found funny and how that humor could translate into a work setting. We gathered this information via a Google Forms poll that ran for two weeks and received 26 responses, which helped inform our creative decisions later. Overall, the results on what kind of comedy the participants liked were mixed; no single show, genre, or comedian dominated the data, so we had to design the bots' responses around a multitude of comedic angles. Luckily, there was consensus on the situations and relationships in which humor is viewed as appropriate. This showed us that we were on the right track with the project and that we shouldn't expect much dissonance from the participants.

Development and Prototyping 


Rather than building a stand-alone system from scratch, my teammate and I used Howdy AI and Slack as development platforms. Howdy AI gave us the ability to make changes to the bots quickly without worrying much about debugging, build times, or future alterations. This was especially important for our client, who wanted to be able to make changes to the system as needed but had no coding background. There were moments in the building process when the client and the team wanted features beyond the scope of Howdy AI's standard functionality. For those, we added our own scripts alongside the ones provided by Howdy AI.
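To give a sense of what one of those add-on scripts might look like, here is a minimal sketch using the current slack_sdk Python package; this is an assumption for illustration only, since the original project relied on Howdy AI's own tooling, and the token variable, channel name, and message text are placeholders.

```python
import os

from slack_sdk import WebClient

# Placeholder token and channel; the real bot was configured through Howdy AI.
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

# Post a one-off message to the study channel.
client.chat_postMessage(
    channel="#user-study",
    text="Hi folks! I'm Humor Bot. I'll be lurking here, tastefully.",
)
```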

Iteration 1: Interrupting Bot  


For the first bot to be tested, we created a chatbot that interrupted the chat to tell jokes on command, give unsolicited input, and chime in when triggered. The setup was a simple word-recognition algorithm that looked for predetermined strings in each user message and then replied with one of several preloaded responses. For example, the screen grab on the left shows an actual conversation where one user triggers the bot with the word "Bus." The bot then interjects with an aside about being a terrible bus driver.
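The word recognition itself was straightforward; the sketch below shows the general idea in Python, with placeholder trigger words and jokes rather than the ones used in the study.

```python
import random

# Hypothetical trigger table: keyword -> canned interjections.
TRIGGERS = {
    "bus": [
        "I'd make a terrible bus driver. I never stop talking at the right stops.",
    ],
    "lunch": [
        "If anyone orders soup, I want to watch. Purely for science.",
    ],
}

def interject(message: str):
    """Return a canned response if the message contains a trigger word, else None."""
    words = message.lower().split()
    for keyword, responses in TRIGGERS.items():
        if keyword in words:
            return random.choice(responses)
    return None  # stay quiet if nothing matched
```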


We found that while the interrupting bot did work in select cases, users rarely triggered the responses. The user tests we ran for this iteration were with one tester and either my teammate or me acting as a confederate posing as another tester. We found that the majority of the triggered responses came from us, the confederates. To address this, we made the testing task more specific and added trigger words previously found in other users' chats. Ultimately, the client was happy with this change, and we saw a positive uptick in the number of user-triggered responses.


Iteration 2: Icebreaker Bot  


One way to get users to interact with the chatbot more was to have the bot play an interactive game before the users started the test task. This also addressed the issue of the bot interrupting during important conversations (feedback that we got in the previous test). For this version of the chatbot, we programmed the bot to play a Mad Lib, the popular children's word game, before the task. The bot asked for a place, noun, verb, and adjective, then slotted those words into a story structure; in the provided example, the user input "China," "Sassy," "Run," and "Dog." We found that users were able to complete the task more quickly, but didn't have as strong a connection to the bot. They had more or less forgotten about it by the time the task was over.
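A rough sketch of the Mad Lib step in Python; the prompts, story text, and the ask callback are stand-ins, since the real version was built from Howdy AI conversation scripts rather than code like this.

```python
# Placeholder story template; the actual story lived in a Howdy AI script.
STORY = (
    "Once upon a time, a {adjective} {noun} decided to {verb} "
    "all the way to {place}. The rest of the team is still catching up."
)

def play_madlib(ask):
    """Collect the four words via the ask callback, then fill the template."""
    answers = {
        "place": ask("Give me a place!"),
        "noun": ask("Now a noun!"),
        "verb": ask("A verb, please!"),
        "adjective": ask("And finally, an adjective!"),
    }
    return STORY.format(**answers)

# With the inputs from the example above ("China", "Dog", "Run", "Sassy"),
# this returns: "Once upon a time, a Sassy Dog decided to Run all the way
# to China. The rest of the team is still catching up."
```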


Iteration 3: Task Delivery Bot  


The final version of the bot took the feedback from both user tests and tried to make a system that was useful, unobtrusive, and still funny. To address these points, we made a bot that would deliver tasks predetermined by our client for his research. The bot delivered these tasks the same way the researcher would in person, but added witty comments along with each task. It worked by listening for a trigger phrase: the users told the system when they were ready to move on, and the bot responded with the next task. Because of time constraints, this bot was never tested with participants, but it was demonstrated for the client.
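The flow behind this bot was essentially an ordered queue advanced by the trigger phrase; the sketch below shows that logic in Python, with a placeholder phrase and placeholder tasks rather than the client's actual study script.

```python
# Placeholder trigger phrase and tasks, not the client's actual study materials.
TRIGGER_PHRASE = "next task"

TASKS = [
    "Task 1: Introduce yourselves. (I'll start: I'm a bot. Riveting, I know.)",
    "Task 2: As a group, rank these office snacks. Choose wisely.",
    "Task 3: Draft a team motto. Bonus points for puns.",
]

def handle_message(message, task_index):
    """Return (reply, new_task_index) for an incoming chat message."""
    if TRIGGER_PHRASE not in message.lower():
        return None, task_index              # ignore unrelated chatter
    if task_index >= len(TASKS):
        return "That's everything! You're free to go. Mostly.", task_index
    return TASKS[task_index], task_index + 1
```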


Project Conclusion 


The research nature of the project meant that even if the bots were complete failures, there was something to be learned in the process. We found that the subjective nature of humor made it challenging to hard-code a chatbot that understood the context, timing, and delivery of the jokes it responded with. Despite this, the overall responses to the first iteration were either positive or neutral. The second iteration ensured there was never a misplaced interjection, but users felt the bot and the task were disconnected. Finally, the third bot addressed both of these issues, but was never tested.


The researcher who headed this project has continued to use this bot for his studies. A new team is currently adding triggers to the system to improve its responsiveness and the likelihood that it generates a response. Though I know the bot is still in use, I don't have access to the data generated from current tests or iterations.

 

