

A gesture-controlled navigation system for a car.


NAVIT is a gesture-based map navigation interface that lets users interact with the app through simple hand gestures and voice commands, without touching the phone screen, so drivers can keep their focus on the road.

Tools:

Green screen

Adobe Dimension

3D modeling

User Research

Usability Test

Green screen video


Timeline:

Sep - Oct 2021 (5 weeks)


Maps have become a major part of our lives, helping us navigate to different locations with ease and reliability. However, basic navigation designs require the driver to touch the phone screen to add or change a destination while driving. This behavior can lead to fatal accidents if the driver's attention slips.

How might we let drivers perform various navigation functions without directly manipulating the app while driving?


How might we help drivers stay safe and focused while driving?



The Product Goal

In this project, I designed a gesture interface for a navigation app. The goal is to recognize certain hand gestures that perform various functions without direct manipulation, so drivers can focus on driving without touching the interface.

Final Solution & Features

Mobile application + Car navigation

See the route list & scroll down the list

Hold up three fingers pointing to the left and slowly lower them to scroll down the list

Select from the list

Hold up one finger in place to select an item from the list

Pan around the map

Hold up two fingers and move them up, down, right, or left to pan around the map

Zoom in on the map

Stretch two fingers apart to zoom in on the map

Zoom out of the map

Bring your fingers together to zoom out of the map
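The gesture vocabulary above can be read as a small dispatch table from recognized gestures to navigation actions. The sketch below illustrates the idea; it is not NAVIT's actual code, and the gesture names and handler functions are hypothetical:

```python
# Illustrative sketch (not the actual NAVIT implementation): a dispatch
# table mapping the recognized gesture vocabulary to navigation actions.
# Gesture names and handler functions are hypothetical.

def scroll_route_list():
    return "scrolling route list"

def select_route():
    return "route selected"

def pan_map(direction):
    return f"panning {direction}"

def zoom_map(factor):
    # factor > 1 zooms in (fingers stretched apart), < 1 zooms out
    return "zooming in" if factor > 1 else "zooming out"

# Each recognized gesture maps to exactly one navigation function.
GESTURE_ACTIONS = {
    "three_fingers_lower": lambda: scroll_route_list(),
    "one_finger_hold":     lambda: select_route(),
    "two_fingers_move":    lambda: pan_map("left"),
    "pinch_out":           lambda: zoom_map(2.0),
    "pinch_in":            lambda: zoom_map(0.5),
}

def handle_gesture(name):
    action = GESTURE_ACTIONS.get(name)
    return action() if action else "unrecognized gesture"
```

Keeping the mapping one-to-one, as in this table, is what lets each gesture stay simple enough to perform without looking at the screen.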

Explanation Video

Design Process


💬Secondary Research

Research in the field of human-computer interaction has shown that hand gestures make interface design more intuitive, natural, and ergonomic. Gesturing is a common way of communicating in human conversation, and it is equally meaningful in communication between human and computer.

According to the study, interactive maps let users change their point of view to meet personal needs and search for additional information. As such, the usefulness of a map navigation interface varies greatly with its interaction style and operations. Map interface operations are critical to designing usage environments, since the operations directly affect navigation performance. Choosing a few simple operations, such as pan, rotate, zoom, tilt, and play tour, reduces complexity and saves time.

Source: Y. Y. Pang and N. A. Ismail, “Users’ Preferences for Map Navigation Gestures”

💬Observation Gesture Research

Gestures are elusive and hard to capture; we tend not to pay them much attention, so they often go unnoticed. To understand people's natural reactions and gestures, I analyzed a video and documented the gestures in it, focusing on what kinds of gestures were used and noting the meaning of each.


✔️ Representing Gestures


Step 1: Sketch a few ideas for different gestures for each function.

To explain which gestures to use, the app needs to represent them clearly to the user. The goal is to find an effective way of representing a gesture.


Step 2: Representing gestures using Figurative and Stylized techniques.

Gestures can be represented in many ways - ranging from photo-realistic hands, through to simple line drawings or icons. More figurative representations may be less ambiguous. More stylized representations can be easier to integrate with the app’s branding.

[Image: figurative and stylized gesture representations]

Step 3: List all gestures and representations for all functions.

[Image: gestures list]

✔️ Test Echo and Semantic Feedback

Echo Feedback is unprocessed sensor data sent back to the user. It ‘echoes’ what the user is doing without triggering anything in the interface, such as activating a button. This kind of feedback tells the user the system is aware of their interactions and helps them calibrate their physical movements.


Semantic Feedback is processed sensor data turned into visual/auditory/tactile feedback to the user. It tells the user they have successfully performed and completed an action.
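The echo/semantic distinction can be made concrete with a hold-to-select recognizer: while the gesture is held, the system continuously echoes progress; once the hold completes, it emits a one-time semantic confirmation. This is an illustrative sketch, not NAVIT's code, and the 1.5-second threshold is an assumed value:

```python
# Illustrative sketch, not NAVIT's implementation: a hold-to-select
# recognizer separating echo feedback (continuous progress while the
# gesture is held) from semantic feedback (a one-time confirmation
# once the hold completes). The 1.5 s threshold is an assumption.

HOLD_THRESHOLD_S = 1.5

class HoldRecognizer:
    def __init__(self):
        self.held_time = 0.0
        self.confirmed = False

    def update(self, gesture_present, dt):
        """Feed one sensor frame; return (echo_progress, semantic_event)."""
        if not gesture_present:
            # Gesture lost: reset, no echo, no event.
            self.held_time = 0.0
            self.confirmed = False
            return 0.0, None
        self.held_time += dt
        progress = min(self.held_time / HOLD_THRESHOLD_S, 1.0)
        if progress >= 1.0 and not self.confirmed:
            self.confirmed = True
            return progress, "selected"   # semantic: action completed once
        return progress, None             # echo only: drive a progress ring
```

The echo value would drive something like a filling progress ring, while the semantic event fires exactly once, which is what tells the user the selection actually happened.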



In this process, I created a wireframe on how to give feedback when using gestures and received other people's thoughts on it.

🖌️ Gesture Feedback Wireframe


✔️ Task Flow

Based on the feedback, I decided to focus on building task flows to show how Navit would work with gestures. Four main tasks map the gestures to users' actions.

[Image: gesture task flow]

✔️ Wireframe

First, I started with a low-fidelity wireframe, checked each function, conducted a user test with it, and wrote down what to improve.


✔️ Usability Test


A virtual user test was conducted to check the user interface design and to see whether the gestures work. Before testing, I wrote a user testing script listing my questions and the story of Navit. Afterward, I compiled a top ten takeaways list.

User Testing Script

[Image: user testing script]

Top Ten Takeaways

I made the top ten takeaways list based on user testing. It covers what went well and what didn't in the user test, which showed me which parts needed to be modified.

[Image: top ten takeaways]

✔️ Style Guide Process

The style guide process allowed me to try different colors, practice color combinations in prototypes, and see which combinations worked best.

[Image: style guide exploration]

✔️ Final Style Guide

From the style guide exploration, I found that a bright blue matched the Navit theme, since I wanted young, comfortable colors for the app.

[Image: final style guide]

✔️ High Fidelity Prototype


After the initial rollout of the high-fidelity prototype, I made further improvements to the "add a stop while driving" flow to solve these two problems:

  1. Users found it hard to hold up three fingers at the same time.

       Solution: Redesign the gesture so the user can simply stretch out their fingers, removing the confusion. Since this step adds a list section, adding a gesture icon helps users understand.

  2. It was difficult for users to tell how long they had to hold the gesture.

      Solution: Clearly show the echo and semantic feedback. When the user shows the gesture to the screen, the interface displays a timeline and turns it green once the gesture is recognized.

[Image: Navit mockup]

Gesture recordings




Final video

💡 Reflection

This project gave me new perspective on designing gesture interfaces: creating gestures, testing them with other users, incorporating feedback, and building skills in creating 3D backgrounds and editing videos along the way.

Moving forward, I would like to refine the interface with more detail and test my final mobile design to see whether any changes are needed, as well as fully flesh out the tablet version.

Next Case Study

Connecting agents and consumers to assist throughout the real estate process.

Dashboard Design (Responsive)
