
Expert Review & Usability Testing | Autodesk

Client
Autodesk, a corporation that builds design, engineering, and entertainment software

Project Impact

Autodesk took our recommendations into account when creating an improved version of the AVA chatbot in Winter 2018.


Purpose

As part of our work in a Bentley graduate course on testing and assessment, our team wanted to understand the following:

  1. What are users’ perspectives on AVA?

  2. How do users understand and interpret AVA’s humanity?

  3. Which UI elements help or hinder users in completing tasks?

  4. What are users’ views on AVA's conversational style?

  5. What are users’ expectations around AI, chatbots, and AVA?

Objectives

  • Review the AVA platform and gather input from users to identify the virtual agent's strengths and weaknesses

  • Provide recommendations for improvement


Note: I received permission from Autodesk to include this project in my portfolio.


Process

1. Met with the client to better understand the problem and the product

2. Set agreed-upon goals with the client regarding the user's experience, the business impact, and the study

3. Decided which heuristic principles and severity scales were best to use

4. Conducted individual and group expert reviews

5. Created a screener and recruited participants

6. Created a consent form and moderator's guide

7. Conducted moderated usability testing

8. Conducted unmoderated usability testing

9. Triangulated results

10. Provided findings and recommendations to the client

Tools

  • Qualtrics survey software

  • System Usability Scale

  • UserTesting.com

  • Rainbow Spreadsheet


Participants

  • Moderated Usability Testing: 8 participants

  • Unmoderated Usability Testing: 2 groups (11-12 users each)

  • Unmoderated Impressions Testing: 2 groups (20 users each)


Project Duration

4 months


Group Members 

Kanika Ahirwar, Rivka Barrett, Devika Gupta, Reed Jones


My Role

  • Collaborated with my team on creating a proposal, screener, consent form and moderator's guide

  • Conducted an individual expert review and helped combine our findings into a group report

  • Moderated or took notes during both remote and in-person usability testing sessions


Keywords: Expert review, heuristic evaluation, usability testing, triangulation


Expert Review 

Each team member first conducted an individual expert review

We each adopted the "Pat" persona and downloaded a free trial of Maya software

We focused on the following tasks:

  • Downloads and upgrades

  • Installation and configuration of software

  • Account management and license management

  • Activation and registration


We then asked the AVA chatbot questions related to installing and managing Maya software


Heuristics 

  • Jakob Nielsen’s 10 heuristics (adapted for use on chatbots by Kevin Scott)

    • Sample heuristics:

      • Match between system and the real world - Chatbot should use words, phrases and concepts the user is familiar with, rather than technical or system-oriented terms

      • User control and freedom - Give users "undo" and "redo" options if the interaction goes in a direction they hadn't intended


Severity Scale

Severity Scope

Using a rainbow spreadsheet, we reviewed the individual expert reviews together to pinpoint any overlap in our findings and discuss areas of disagreement. 

 

We then compiled the most important findings into a group report.
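
As a rough illustration of the technique (the findings and evaluator labels below are hypothetical, not the study's actual data), a rainbow spreadsheet can be thought of as a matrix that tallies how many evaluators independently observed each finding:

```python
# Hypothetical sketch of the tallying behind a rainbow spreadsheet.
# Findings and evaluator labels are illustrative, not the study's data.

findings = {
    "Start Over button not always visible": {"Evaluator A", "Evaluator C"},
    "Unfamiliar licensing terminology": {"Evaluator B", "Evaluator C", "Evaluator D"},
    "Quick, relevant responses": {"Evaluator A", "Evaluator B", "Evaluator C", "Evaluator D"},
}

# Rank findings by how many evaluators independently observed them;
# high-overlap findings are strong candidates for the group report.
ranked = sorted(findings.items(), key=lambda kv: len(kv[1]), reverse=True)

for finding, observers in ranked:
    print(f"{len(observers)}/4 evaluators: {finding}")
```

Findings observed by only one or two evaluators are the "areas of disagreement" worth discussing as a group before deciding what makes the report.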

Combining individual findings

Rainbow Spreadsheet, November 2017 and January 2018 (purposely blurred)

Overview

Pat is a professional in her mid-30s who works with CAD software and is managing her own product license.

She wants clear information on available products, features, and pricing.

 

Pat only wants the information she needs and can use, without unnecessary details, and would like quick and easy access to support services when needed.

Persona: Pat

Usability Testing

Moderated Usability Testing

 

To support the findings from our expert review, we ran a moderated usability study with 8 users to examine their interactions with AVA and gather their feedback.


  • Sessions were 1 hour long and were run in-person or remotely

  • All participants were encouraged to think out loud

  • All participants were asked to complete the System Usability Scale (SUS) on Qualtrics.


We captured moments of delight, pain points, instances of providing assistance, and quotes.

Unmoderated Usability Testing

 

To confirm or refute our findings from the moderated usability testing sessions, we ran unmoderated sessions with two groups (11-12 participants each) on UserTesting.com.


  • Unmoderated sessions were 15-20 minutes long

  • The tasks we asked participants to complete were the same ones as in moderated sessions

  • We asked each group different questions at the end of each session (SUS and Autodesk internal questions)

  • Again, we asked non-Autodesk users to talk through their interactions with AVA


We also ran unmoderated impressions testing to gauge user expectations for AVA before and after interacting with the chatbot.


  • We ran unmoderated sessions with two groups (20 participants each) on UserTesting.com

  • Each session was 10 minutes long, and each participant completed 1 task

  • One group was asked about pre-task expectations before the task

  • The other group was asked about pre-task expectations after completing the task


We watched selected videos of the unmoderated sessions and took notes on examples that supported or contradicted our findings, as well as on participants' responses to the SUS questions. In addition, I analyzed results from the simplified SUS questionnaire that participants filled out during the remote unmoderated sessions.
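
For context, the standard SUS scoring procedure (shown here as a generic sketch, not necessarily the exact simplification our study used) converts ten 1-5 Likert ratings into a single 0-100 score:

```python
# Standard SUS scoring: a generic sketch, not the study's actual analysis code.
# Each response is a list of ten 1-5 Likert ratings in questionnaire order.

def sus_score(responses):
    """Convert ten 1-5 SUS item ratings into a single 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, rating in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded:
        # they contribute (rating - 1). Even-numbered items are negatively
        # worded: they contribute (5 - rating).
        total += (rating - 1) if i % 2 == 0 else (5 - rating)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example: a fairly positive participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # 82.5
```

Averaging these per-participant scores across a group gives a single usability benchmark; scores above roughly 68 are conventionally read as above average.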


Findings

We triangulated results from our expert review, moderated usability testing, unmoderated usability testing, and Qualtrics SUS survey data.

 

The Usability Testing results supported many of the findings from our expert review.


Key positive findings:


AVA has a welcoming and professional tone

  • Provides quick and relevant feedback in most cases

  • Refers users to sections of the website they may not find otherwise

  • 3 of 8 users reported being delighted by AVA's quick response time

 

AVA's conversational UI follows common instant- and text-messaging conventions

  • Clear "enter" and "send" buttons

  • The use of speech bubbles while AVA is typing 

  • 5 of 8 participants found the speech bubbles a useful indicator that AVA was in the process of responding to their questions​


AVA’s chat screen resizes to fit the window and provides a clear "Start Over" CTA

  • AVA’s name and avatar are always visible

  • AVA provides users with a quick feedback mechanism to rate the experience

 

AVA consistently provides the user with informational messaging

  • If the user is not satisfied, AVA provides the option to connect with a human agent by opening a form to the right of the window

  • When users are signed in, the form is auto-completed for them


AVA explains some confusing terminology that users may not be readily familiar with

  •  For example, users may not be familiar with the difference between "moving" and "transferring" a license, which AVA takes care to clarify

 

Participants who were asked about their pre-task expectations after interacting with AVA reported higher expectations than those asked beforehand.

  • This suggests that user expectations of chatbots are low, and that these expectations can be leveraged to impress users with AVA's capabilities


Summary of Recommendations

Manage expectations accurately

  • Explain to users what AVA can do without overselling her capabilities

  • Telling users to interact with AVA as if she is human causes them to ask questions in a way that she may not always understand

  • Autodesk should leverage users’ low expectations; users will then be more likely to be impressed with what AVA can do

    • We expect that AVA's interactions will also become more human-like as the chatbot continues to learn and undergo technical improvements 


Learn to hold a conversation​

  • Some users remarked that AVA's conversational style was somewhat rigid, and they sometimes needed to use the Start Over button to change topics

  • The Start Over button was not always visible during interactions; it should be moved to a location where it remains visible

  • AVA should recognize when the user refers to something she said earlier


Define in-chat interaction patterns

  • These patterns indicate the actions users can take during their conversation with AVA

  • Buttons or bold text can help provide indicators for next steps, as long as they are used consistently throughout the interaction

What I Learned

  • People tend to have low expectations of chatbots' capabilities, and these low expectations can be leveraged to "wow" users with AVA's performance. Setting high expectations (e.g., that users can interact with AVA as they would with a human) does not give them a realistic impression of her abilities

  • Triangulation and combining qualitative and quantitative data creates a more comprehensive picture of the user experience

  • Providing actionable, prioritized feedback in a digestible format allows the client to take away insights and apply them towards improving the product
