
WAITER BOT
Alexa-Powered Conversational Interface for Service Robotics
DESIGN GOALS
Create a conversational interface for a robotic waiter that uses context to shape its responses.
Test the conversational system using real-world scenarios a catering service might encounter.
Account for the limitations of the robot's physical capabilities while still providing a good user experience.
CONVERSATIONAL INTERFACE
Ten custom intents were trained using the Alexa Skills Kit.
Responses were generated by a custom web-based endpoint, enabling contextualized replies and user re-prompting.
For example, if the robot already has more than three orders and receives a new one, it responds that it is busy and will return later.
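As a rough sketch, the busy-robot logic described above might look like the following. The function and constant names here are illustrative, not taken from the actual skill:

```python
# Hypothetical sketch of the contextualized-response logic, assuming a
# simple in-memory order queue. MAX_PENDING_ORDERS and the response
# wording are assumptions for illustration.
MAX_PENDING_ORDERS = 3

def handle_new_order(order_queue, new_order):
    """Return the robot's spoken response to a new order intent."""
    if len(order_queue) > MAX_PENDING_ORDERS:
        # Robot is busy: decline the order and promise to return later.
        return "I'm busy with other orders right now, but I'll come back soon."
    order_queue.append(new_order)
    return f"Got it, one {new_order['item']} for {new_order['name']}."
```

Keeping the queue on the endpoint rather than in the Alexa skill itself is what allows the same intent to produce different responses depending on the robot's current workload.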
CORE FEATURES
Waiter Bot can account for common requests present in food service.
Waiter Bot can take food and drink orders and will ask users whether they'd like something to eat or drink if they ordered only one of the two.
Waiter Bot tracks user orders by name to ensure accurate delivery.
Waiter Bot can handle orders for items not on the menu by telling the user which products are available.
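A minimal sketch of the order-tracking and off-menu behavior described above, assuming a simple in-memory store. The menu contents and function name are hypothetical:

```python
# Illustrative menu and order store; the real skill's data is not shown
# in the summary, so these values are placeholders.
MENU = {"coffee", "tea", "sandwich", "salad"}

orders = {}  # customer name -> list of ordered items

def take_order(name, item):
    """Record an order by customer name, or suggest alternatives
    when the requested item is not on the menu."""
    if item not in MENU:
        available = ", ".join(sorted(MENU))
        return f"Sorry, we don't have {item}. We do have {available}."
    orders.setdefault(name, []).append(item)
    return f"One {item} for {name}, coming up."
```

Keying orders by customer name is what lets the robot match each prepared item back to the person who requested it at delivery time.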
KEY FINDINGS
Maintaining a running state of what the robot is doing allows for effective communication.
Clever use of slots helps create effective and natural sounding dialogues between robot and user.
Alexa applications cannot speak unprompted, which makes the robot's state less visible to users.
Alexa's ASR is very sensitive to background noise, which negatively impacts the user experience in busier settings.
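For illustration, the slot-based design mentioned in the findings can be sketched as an intent definition in which slots stand in for menu items, so one intent covers many phrasings. The slot names, types, and sample utterances below are assumptions, not the skill's real schema:

```python
# Hypothetical intent schema fragment in the style of an Alexa Skills Kit
# interaction model; all names here are illustrative.
ORDER_INTENT = {
    "name": "OrderIntent",
    "slots": [
        {"name": "food", "type": "FOOD_ITEM"},
        {"name": "drink", "type": "DRINK_ITEM"},
    ],
    "samples": [
        "I'd like a {food}",
        "can I get a {drink}",
        "one {food} and a {drink} please",
    ],
}
```

Because the slot values arrive separately from the utterance, the endpoint can notice that only one of the two slots was filled and re-prompt naturally for the other.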

VR COMMUNICATION SYSTEM FOR ASYMMETRIC CO-OP PLATFORMER
DESIGN GOALS
Develop an in-game communication system for a co-op platformer where one player uses an Oculus VR headset and the other uses a traditional keyboard and mouse for controls.
VR controls need to be easy to use and discover, especially for a child.
VR controls need to account for the game's crawling locomotion mechanic.
GESTURE-BASED INTERFACE
Interface uses simple gestures such as waving, pointing, and clapping.
These are natural forms of communication and easy for users to discover.
User testing was performed by having a novice player explore until each mechanic was discovered.
This influenced button use and input modalities for gestures.
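As one illustrative approach (not necessarily the game's actual implementation), a wave gesture could be detected as repeated direction changes in the hand's x position over a short window. The thresholds below are assumptions:

```python
# Sketch of a simple wave-gesture classifier over sampled hand positions.
# min_delta and min_changes are illustrative tuning values.
def count_direction_changes(xs, min_delta=0.05):
    """Count left/right direction reversals in a sequence of x positions,
    ignoring movements smaller than min_delta."""
    changes = 0
    prev_dir = 0
    for a, b in zip(xs, xs[1:]):
        delta = b - a
        if abs(delta) < min_delta:
            continue  # too small to count as deliberate motion
        direction = 1 if delta > 0 else -1
        if prev_dir and direction != prev_dir:
            changes += 1
        prev_dir = direction
    return changes

def is_wave(xs, min_changes=3):
    """Classify a position sequence as a wave if the hand reverses
    direction at least min_changes times."""
    return count_direction_changes(xs) >= min_changes
```

A threshold-based detector like this is forgiving of sloppy motion, which matters when the target user is a child discovering the gestures unprompted.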
KEY FINDINGS
Input modality can be difficult for users to discover without prompting, likely due to divergence from familiar interfaces and users' limited experience with VR.
Some gestures overlap in input modality, making the interface inconsistent in some cases.
