RESEARCH AND UX DESIGN

iMessage for iOS 12


The Problem

How might we address the communication delay and distance issues that plague messaging platforms such as iMessage, Facebook Messenger, and WhatsApp? The existing solution used by many of these apps (“read receipts”) is divisive and disliked by many, and there is a general lack of features that encourage timely responses and provide consensual transparency between users.

User Research

Starting with an examination of existing solutions, I formulated a project plan to study users firsthand. By conducting user interviews and analyzing my findings, I created a clear-cut set of solution requirements. Essentially, the app should do the following:

  • Provide transparency between the sender and receiver.
  • Require little to no effort on the sender and receiver’s end.
  • Allow for immediate feedback from the receiver to the sender (and vice versa, if applicable).
  • Encourage a quick response time.
  • Prevent messages from being forgotten.
  • Require both sender and receiver to willingly opt in (on each message send or for the entire conversation history).

Prototyping

Using paper prototypes and continuous iteration via usability testing and user feedback, I arrived at my final design. One of the key learnings during these iterative steps was the potential to expand upon the problem space. I introduced additional functionality that would give the sender greater control over timing with scheduled messages.

During this process, I also refined the way that status was reflected to the message receiver while the sender was preparing to respond. This struck a balance between indicating that the other person would respond without overpromising on a response.

The Solution

The key output of the project is an interactive prototype, implemented within the iOS iMessage app. New features, including flagged conversations, remind me later, send later, and contextually aware receipts, were introduced to solve the latency and distance issues in communication.

*Note: I completed this project in the fall of 2017, shortly after the release of iOS 11.

View full project →

DESIGN SYSTEMS

PG&E Email Design System

The Problem

PG&E (Pacific Gas & Electric Company) is a major energy provider, serving millions of customers in California. They are focused on helping their customers save money and energy through dozens of programs and initiatives. This means hundreds of unique email communications, which were being designed and developed by more than a dozen disparate creative agencies.

The wide variety of design patterns before the introduction of the Framework

The Solution

With the PG&E Email Design System, we established a unified design language for PG&E’s marketing materials. Drawing inspiration from the company’s digital presence and guided by email usability principles, we created a living design system.

SUCCESS METRICS

Operationally, the PG&E campaign team saw huge gains in efficiency and cost savings. The team was able to deliver 3x the number of campaigns for the same level of effort as before.

With the Design System, PG&E was able to take a truly mobile approach to email for the first time. The modules were designed to support clear, concise messaging with a focus on actionable content.

PG&E customers responded with significantly higher engagement rates across the board. Some messages, when updated to the new email design system, saw click engagement increase by up to 4x compared to the old campaigns.

EVOLUTION

The future growth of the design system is founded upon a need to understand:

  • Subscribers as users. How the end users of PG&E’s email messaging use email as a touchpoint in the customer experience. By analyzing engagement and conducting usability studies, we can enhance what is included in the design system. And by introducing a user feedback touchpoint directly in emails, we can help PG&E focus on message relevancy, making the design system’s purpose more precise.
  • Marketers as users. How the implementation of the design system, as a tool, serves the needs of marketers, designers and developers. By conducting user interviews and surveys with this group of users, we can assess what is working and what can be improved about the design system as a tool.

For the design system to continue to be successful, it must reflect the needs of the users as defined by these categories.

See design system documentation →

USABILITY RESEARCH

Pardot Usability Study


Process Overview

Pardot is an email service provider from Salesforce, designed specifically for B2B email campaigns. Having recently gotten to know the interface, I saw it as an ideal candidate for a usability study because of how it had been integrated into Salesforce.

As of today, a Salesforce customer with Pardot can access it from within their instance by searching for “Pardot” in the app switcher. While this provides a high level of convenience, my goal with this study was to understand what impact that has on the user experience.

Data Collection

For my quick-and-dirty usability test, I recruited someone unfamiliar with Pardot. (In this case, it was my wife, because she met the criteria and was immediately accessible and available.) With agreement from my participant, I recorded the interview. I asked her to think aloud while completing a couple of basic tasks.

First, I asked her to find the page used to send emails. Second, I asked her to check the metrics of any or all email campaigns. Both of these tasks represent key functionalities of Pardot (email sending and reporting).

For the purposes of this study, the qualitative description of why she took each action was more valuable than the quantitative evaluation of whether or not she was successful.

Results

The key finding of this study was that the doubled navigation, with the Lightning navigation running horizontally across the top and the Pardot navigation running vertically along the side, is a source of considerable user confusion (especially for new users).

Further testing could be performed to determine to what extent the split navigation affects experienced users, given how much it requires them to toggle back and forth between navigation panes.

Additionally, the sizing of the iFrame cuts off significant parts of the interface, limiting the visibility of what is happening at any given time. This also hinders navigation speed.

Recommendations

The primary recommendation is to rebuild Pardot to be truly “natively” Lightning. This would mean rebuilding some parts of the app to use Lightning tabs rather than the left-hand panel. Incidentally, a few months after I completed this report, Salesforce announced at Dreamforce that “lightning had struck Pardot,” executing precisely the key recommendation I provided.

As a full product rebuild like this is not always technically feasible right away, I also recommended an intermediate strategy. The first step would be adjusting the iFrame to fit snugly to the screen (100% of viewport width and 100% of viewport height, minus the Lightning nav) to reduce double vertical scrolling issues.
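As a rough sketch of that sizing fix, a small script could fit the embedded frame to the viewport. (Both selectors below are hypothetical placeholders, since Salesforce’s actual markup isn’t shown here.)

```typescript
// Sketch only: fit the embedded Pardot iFrame to the viewport, minus the
// height of the Lightning nav. Both selectors are hypothetical placeholders.
function fitPardotFrame(): void {
  const nav = document.querySelector<HTMLElement>(".lightning-nav");
  const frame = document.querySelector<HTMLIFrameElement>("#pardot-frame");
  if (!nav || !frame) return;
  frame.style.width = "100vw";                                // full viewport width
  frame.style.height = `calc(100vh - ${nav.offsetHeight}px)`; // viewport height minus nav
}

window.addEventListener("resize", fitPardotFrame);
fitPardotFrame();
```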

Additionally, slightly revising the navigation would resolve many of the usability issues. When Pardot is nested inside a Salesforce instance, CSS overrides could hide the Pardot sidebar, with Lightning tabs added instead to load each respective navigational page.
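A minimal sketch of that override, assuming a hypothetical sidebar class name and detecting the framed context at runtime:

```typescript
// Sketch only: hide the Pardot sidebar when the app is running inside a
// frame (i.e., nested in a Salesforce instance). ".pardot-sidebar" is a
// hypothetical class name, not Pardot's actual markup.
if (window.self !== window.top) {
  const style = document.createElement("style");
  style.textContent = ".pardot-sidebar { display: none; }";
  document.head.appendChild(style);
}
```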

Read full report →

USER RESEARCH / USABILITY RESEARCH

IU Student Center Usability Report


Process Overview

Student Center, or SIS (Student Information System), is the web app used by Indiana University to allow students to search for, register for, and manage their courses. It also provides them with additional resources such as information about their academic advisors and access to transcripts.

From my first day as a student at IU (well before I even knew what “Interaction Design” was), I was highly aware of how miserable my experience with Student Center was. Every conversation I ever had with fellow students about the software confirmed this. Because of this, I was insatiably curious to discover the source of this displeasure and determine what changes could be made to improve it.

This was a more involved usability study that aimed to identify the sources of specific usability issues, anticipate the impact of those issues on the general user population, and provide a series of both general and specific recommendations to improve the Student Center experience. I approached it as neutrally as possible, so as not to let my own negative experience color the results I got from research participants.

Phase 1: System Discovery

Data Collection
To better understand how users interact with the system on a regular basis, I used two research methods: 1:1 interviews and contextual inquiry. For the interviews, I wrote a script in advance with questions to gain insight into the experience of specific users. I recruited two users who were current students and, with their permission, recorded the interviews. For the contextual inquiry, I recruited one current user and again recorded while asking them to perform their standard activities of searching and registering for classes.

Results
With the aid of my recordings, I generated detailed notes from both the interviews and the contextual inquiry. After combing through these, I highlighted key ideas that were present throughout both studies. Each key idea was translated into a sticky note, which allowed me to create an affinity diagram, organized by the area of the app affected and color-coded by positive, negative, or neutral sentiment.

My findings translated the affinity diagram into statements about the system as a whole. These too were color-coded by sentiment, allowing what currently works and what needs to change to emerge clearly. Some of the key findings include:

  • Navigation is not linear or consistent, leading to regular unexpected loss of progress
  • Searches cannot be saved and require a great deal of repeated manual data entry that slows the search process immensely
  • Finding the right classes requires a comparison with what’s already in the shopping cart and what classes are required, and currently there is no way to see all of these things in the same place

This is a very reduced summary of the full findings. You can read the full report (Phase 1: System Discovery) for complete details.

Recommendations
Based on the findings, I established highly specific system requirements that an updated system should address. These include establishing a strong visual language and hierarchy, introducing a global navigation, and allowing users to compare classes, among a number of other requirements. Read the full report for the complete list of recommendations.

Read phase 1 →

Phase 2: Usability Metrics

Data Collection
The goal for the second part of the report was to drill down into the specific usability issues and collect quantitative data to determine their impact and identify potential solutions. It highlighted the frequency of errors and their causes.

For this usability study, I recruited four participants for think-aloud sessions: a mix of experienced current users of Student Center and users who had never seen it before.

I outlined two specific tasks in advance and asked each participant to complete them while thinking aloud. With pre-defined parameters for success or failure, I was able to grade each user with a pass, fail, or incomplete on every task. Because I took notes and, with each user’s permission, recorded every session, I was able to identify the specific cause of each success or failure.

Results
Using my quantitative data, I was able to generate a confidence interval for the actual completion rate of each task in the general population. For this calculation, I used the adjusted-Wald method due to my relatively small sample size.

For both tasks, only 1 out of 4 users was successful, meaning that I can be 95% confident the actual completion rate falls between 3% and 71%. The adjusted completion rate for both tasks was 37%; that is my best estimate of how many actual users would be able to complete each task. Increasing the number of users who attempt each task would narrow that range, giving a more accurate prediction.
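For reference, a minimal sketch of the adjusted-Wald arithmetic behind those numbers (not the exact tooling I used):

```typescript
// Adjusted-Wald (Agresti-Coull) confidence interval for a completion rate.
function adjustedWald(successes: number, trials: number) {
  const z = 1.96; // z-score for 95% confidence
  const zsq = z * z;
  const rate = (successes + zsq / 2) / (trials + zsq);        // adjusted completion rate
  const se = Math.sqrt((rate * (1 - rate)) / (trials + zsq)); // standard error
  return { rate, low: Math.max(0, rate - z * se), high: Math.min(1, rate + z * se) };
}

const { rate, low, high } = adjustedWald(1, 4);
// rate ≈ 0.37, low ≈ 0.03, high ≈ 0.71, matching the 37% adjusted
// completion rate and the 3% to 71% interval above.
```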

Additionally, using my collected data, I was able to determine that there is a 1% chance the completion rate exceeds 70% in the general population. Both of these metrics indicate that major usability issues affect these extremely central tasks, and that a great number of users, including experienced ones, have difficulty completing them.

Recommendations
Using the notes I gathered for this section, I came up with a number of specific recommendations to fix the discovered issues. This included allowing users to click on the weekly course overview on the homepage to get to class details, redesigning controls to meet users’ expectations or previous experiences, and introducing a universal shopping cart that can be accessed from anywhere. Read the full report for the complete list of recommendations.

Read phase 2 →

PRODUCT/UX DESIGN

Content Builder Lightning

Content Builder is a tool in Marketing Cloud for creating email content. It includes drag-and-drop features to combine content blocks and create modular emails.

This project began with an extensive list of usability issues.

My prototype set out to demonstrate how to improve the current state of:

  • Predictability. At times, actions have unexpected reactions that can be disorienting.
  • Safety. Actions often have irreversible consequences, which discourages discovery and makes for a hostile working environment.
  • Spatial mapping. Transitions and overlays make it difficult to visualize “where” the user is in the app.
  • Visibility. Labels and terminology are often unclear as to what they contain or what they represent.

RESEARCH AND UX DESIGN

Otter: Team Collab App

THE PROBLEM

As an avid user of Quip, I couldn’t help but think about how to solve some of the issues that I encounter in everyday use. While Quip is an excellent tool for team collaboration, it’s inherently messy for a user working on several distinct teams at once. Furthermore, while it has its own folders for organizing Quip docs and spreadsheets, it doesn’t permit the storage of any external (non-Quip) files. With this project, I prototyped a new mobile app centered on team collaboration, with an integrated file system built around working in teams.

User Research

To better understand how to approach the design of the app, I tried to forget about Quip altogether. This was not a redesign of Quip but a look at how to solve the problem of team collaboration with a fresh start. My research of the problem space revealed that while many enterprise apps work together to solve the problem (Slack, G Suite, and Basecamp, to name a few), none handles file storage, team communication, and documents in the same place.

Through a process of user interviews and analysis, I established a set of requirements, key takeaways, and recommended features for the design of the app.

Prototyping

Using paper prototypes and an interaction flow map, I created an initial prototype. With some quick user testing, I was able to identify areas that needed improvement. For example, I created a greater distinction between pinned files and team folders. I also honed interface copy and icon selection to clearly and concisely communicate what tapping each button would do.

With these in mind, I created a dynamic interactive prototype that includes all core screens of the app. After a final usability test, I ended with some final recommendations for improvement. These include ensuring the same button is not used for multiple actions in different contexts, to avoid confusion, and encouraging safety by adding confirmations before destructive actions are performed.

See “Otter” prototype →