What is the solution for the communication delay and distance issues that plague messaging platforms such as iMessage, Facebook Messenger, WhatsApp, and others? The existing solution used by many of these apps (“read receipts”) is divisive and disliked by many, and there is a general lack of features that encourage timely responses and provide consensual transparency between users.
Starting with an examination of existing solutions, I formulated a project plan to study users firsthand. By conducting user interviews and analyzing my findings, I created a clear-cut set of solution requirements. Essentially, the app should do the following:
Using paper prototypes and continuous iteration via usability testing and user feedback, I arrived at my final design. One of the key learnings during these iterative steps was the potential to expand upon the problem space. I introduced additional functionality that would give the sender greater control over timing with scheduled messages.
During this process, I also refined the way that status was reflected to the message receiver while the sender was preparing to respond. This struck a balance between indicating that the other person would respond without overpromising on a response.
The key output of the project is an interactive prototype, implemented through the iOS iMessage app. New features, including flagged conversations, remind me later, send later, and contextually aware receipts, were introduced to solve latency and distance issues in communication.
*Note: I completed this project in the fall of 2017, shortly after the release of iOS 11.
PG&E (Pacific Gas & Electric Company) is a major energy provider, servicing millions of customers in California. They are focused on helping their customers save money and energy by means of dozens of programs and initiatives. This means hundreds of unique email communications, which were being designed and developed by over a dozen disparate creative agencies.
The PG&E Email Design System established a unified design language for PG&E’s marketing materials. Drawing inspiration from their digital presence and guided by email usability principles, we created this living design system.
Operationally, the PG&E campaign team saw huge gains in efficiency and cost savings. The team was able to deliver 3x the number of campaigns for the same level of effort.
With the Design System, PG&E was able to have a truly mobile approach for email for the first time. The modules were designed to support clear, concise messaging with a focus on actionable content.
PG&E customers responded with significantly higher engagement rates across the board. Some messages, when updated to the new email design system, saw click engagement increases up to 4x compared to old campaigns.
The future growth of the design system is founded upon a need to understand:
For the design system to continue to be successful, it must reflect the needs of the users as defined by these categories. See design system documentation →
Pardot is an email service provider from Salesforce, designed specifically for B2B email campaigns. After recently getting to know the interface, I saw it as an ideal candidate for a usability study because of how it had been integrated into Salesforce.
As of today, a Salesforce customer with Pardot can access it from within their instance by searching for “Pardot” in the app switcher. While this provides a high level of convenience, my goal with this study was to understand what impact that has on the user experience.
For my quick-and-dirty usability test, I recruited someone unfamiliar with Pardot. (In this case, it was my wife, because she met the criteria and was immediately accessible and available). With agreement from my participant, I recorded the interview. I asked her to think aloud while completing a couple of basic tasks.
First, I asked her to find the page used to send emails. Second, I asked her to check the metrics of any or all email campaigns. Both of these tasks represent key functionalities of Pardot (email sending and reporting).
For the purposes of this study, the qualitative description of why she took each action was more valuable than the quantitative evaluation of whether or not she was successful.
The key finding of this study was that the doubled navigation, with the Lightning navigation horizontally across the top and the Pardot navigation vertically along the side, is a source of considerable user confusion (especially for new users).
Further testing could be performed to determine to what degree the split navigation affects experienced users, given how much it requires them to toggle back and forth between navigation panes.
Additionally, the sizing of the iFrame cuts off significant parts of the interface, limiting the visibility of what is happening at any given time. This also hinders navigation speed.
The primary recommendation is to rebuild Pardot to be truly “natively” Lightning. This would mean rebuilding some parts of the app to use Lightning tabs rather than the left-hand panel. Incidentally, a few months after I completed this report, Salesforce announced at Dreamforce that “lightning had struck Pardot,” executing precisely the key recommendation I provided.
As a full product rebuild like this is not always technically feasible right away, I also recommended an intermediate strategy. This would include first adjusting the iFrame to fit snugly to the screen (100% of viewport width and 100% of viewport height, minus the Lightning nav) to reduce double vertical scrolling issues.
Additionally, slightly revising the navigation would resolve many of the usability issues. When Pardot is nested into a Salesforce instance, CSS overrides could hide the Pardot sidebar and add Lightning tabs that load each respective navigation page.
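As a rough sketch of this intermediate fix, the overrides might look like the following. The selectors and the nav height here are hypothetical; the actual Lightning and Pardot class names would need to be inspected in the rendered DOM.

```css
/* Hypothetical selectors — inspect the real Lightning/Pardot markup for actual class names */
.pardot-iframe {
  width: 100vw;                /* fill the full viewport width */
  height: calc(100vh - 56px);  /* full height minus an assumed ~56px Lightning nav bar */
}
.pardot-sidebar {
  display: none;               /* hide the duplicate Pardot navigation pane */
}
```

Snug iFrame sizing addresses the double-scrolling issue immediately, while the hidden sidebar clears the way for Lightning tabs to take over navigation.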
Student Center, or SIS (Student Information System), is the web app used by Indiana University to allow students to search for, register for, and manage their courses. It also provides them with additional resources such as information about their academic advisors and access to transcripts.
Since my first day as a student at IU (well before I even knew what “Interaction Design” was), I was highly aware of how miserable my experience using Student Center was. Every conversation I ever had with fellow students about the software confirmed this. Because of this, I was insatiably curious to discover the source of this displeasure and determine what changes could be made to improve it.
This was a more involved usability study that aimed to identify the source of specific usability issues, anticipate the impact of those issues in the general population, and provide a series of both generalized and specific recommendations that could improve the Student Center experience. I approached it as neutrally as possible, so as not to let my own negative experience color the results I got from research participants.
To better understand how users interact with the system on a regular basis, I used two research methods: 1:1 interviews and contextual inquiry. For the interviews, I wrote a script in advance with questions to gain insight about the experience of specific users. I recruited two users who were current students, and with their permission, recorded the interviews. For the contextual interview, I recruited one current user, and again recorded while asking them to perform their standard activities of searching and registering for classes.
With the aid of my recordings, I generated detailed notes from both the interviews and the contextual inquiry. After combing through these, I highlighted key ideas that were present throughout both studies. Each key idea was translated into a sticky note, which allowed me to create an affinity diagram, organized by the area of the app affected and color-coded by positive, negative, or neutral sentiment.
My findings translated the affinity diagram into statements about the system as a whole. These too were color-coded by sentiment, allowing what is currently working versus what needs to change to emerge clearly. Some of the key findings include:
This is a very reduced summary of the full findings. You can read the full report (Phase 1: System Discovery) for complete details.
Based on the findings, I established highly specific system requirements that an updated system should address. These include establishing a strong visual language and hierarchy, introducing a global navigation, and allowing users to compare classes, among a number of other requirements. Read the full report for the complete list of recommendations.
The goal for the second part of the report was to drill down into the specific usability issues and collect quantitative data to determine their impact and identify potential solutions. It highlighted the frequency of errors and their causes.
For this usability study, I recruited four participants for think-aloud sessions. They were a mix of experienced current users of Student Center and users who had never seen it before.
I outlined two specific tasks in advance and asked each participant to complete them while thinking aloud. With pre-defined parameters for success or failure, I was able to grade each user with a pass, fail, or incomplete on every task. Because I also took notes and recorded each session with user permission, I was able to specifically identify the cause of success or failure.
Using my quantitative data, I was able to generate a confidence interval of the actual completion rate of each task in the general population. For this calculation, I used the adjusted-Wald method due to my relatively small sample size.
For both tasks, only 1 out of 4 users was successful, meaning that I can be 95% confident the actual completion rate is between 3% and 71%. The adjusted completion rate for both tasks was 37%. That’s my best estimate for how many actual users would be able to complete each task. By increasing the number of users who attempted each task, we could begin to narrow that range, giving us a more accurate prediction.
Additionally, using my collected data, I was able to determine that there is a 1% chance the completion rate exceeds 70% in the general population. Both of these metrics indicate that there are major usability issues facing these extremely central tasks, and that a large share of users, including experienced ones, have difficulty completing them.
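The adjusted-Wald interval described above can be sketched in Python. The helper name is mine; this is an illustrative recalculation under the same inputs (1 success out of 4 trials at 95% confidence), not the original analysis.

```python
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a task
    completion rate, suited to small usability-test samples."""
    # Add z^2/2 successes and z^2 trials before applying the Wald formula
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

p, low, high = adjusted_wald_ci(1, 4)  # 1 of 4 participants succeeded
print(f"adjusted rate {p:.0%}, 95% CI [{low:.0%}, {high:.0%}]")
# → adjusted rate 37%, 95% CI [3%, 71%]
```

The adjustment pulls the point estimate toward 50% before computing the margin, which is why the reported rate is 37% rather than the raw 25%.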
Using the notes I gathered for this section, I came up with a number of specific recommendations to fix the discovered issues. This included allowing users to click on the weekly course overview on the homepage to get to class details, redesigning controls to meet users’ expectations or previous experiences, and introducing a universal shopping cart that can be accessed from anywhere. Read the full report for the complete list of recommendations.
Content Builder is a tool in Marketing Cloud for creating Email content. It includes drag-and-drop features to combine content blocks and create modular emails.
This began with an extensive list of usability issues.
My prototype set out to demonstrate how to improve the current state of:
As an avid user of Quip, I couldn’t help but think about how to solve some of the issues that I encounter in everyday use. While Quip is an excellent tool for team collaboration, it’s inherently messy for a user working on several distinct teams at once. Furthermore, while it has its own folders that organize Quip docs and spreadsheets, it doesn’t permit the storage of any external (non-Quip) files. With this project, I prototyped a new mobile app centered around team collaboration, with an integrated file system.
To better understand how to approach the design of the app, I tried to forget about Quip altogether. This was not a redesign of Quip but a look at how to solve the problem of team collaboration with a fresh start. My research of the problem space revealed that while many enterprise apps work together to solve the problem (Slack, G Suite, and Basecamp to name a few), none do file storage, team communication and documents in the same place.
Through a process of user interviews and analysis, I established a set of requirements for the design of the app. Some of the key takeaways and recommended features included:
Using paper prototypes and an interaction flow map, I created an initial prototype. With some quick user testing, I was able to identify areas that needed improvement. For example, I created greater distinction between pinned files and team folders. I also honed interface copy and icon selection to clearly and concisely communicate the action that tapping certain buttons would have.
With these in mind, I created a dynamic interactive prototype, which includes all core screens of the app. After a final usability test, I ended with some final recommendations for improvement. These include ensuring the same button is not used for multiple actions in different contexts, to avoid confusion, and adding confirmations before destructive actions to encourage safety.