
Task Page Enhancements

Rocket Mortgage - March 2023

Task Manage V2.png

My Role

Product Designer — Interaction Design, Visual Design, User Flows, Rapid Prototyping, UX Research

Overview

Rocket Mortgage aims to provide seamless digital experiences for home buying and refinancing. However, when users have more complex loan profiles, the tasks they're assigned can complicate their mortgage process.

With a focus on raising NPS scores for purchase clients, I sought out ways to reduce wrongful document submissions. Through qualitative research, we learned that a new task was needed to help users answer a request from loan agents that the existing experience couldn't capture. This new task led to a 4% decrease in revision requests against our control. The test involved breaking the content of individual tasks out into sub-pages to convey each task's requirements clearly.


I then designed an overhaul of the entire task management experience around this new scalable pattern, aiming to further reduce revision requests across other tasks.

THE END EXPERIENCE

We created a simpler mortgage process 

By focusing on the most problematic tasks users endured, we uncovered insights that led to a more efficient task management pattern that reduced revision requests.

THE PROBLEM

Our clients were tired of repeating themselves...

Leading up to this project, baseline research established that one of the main detractors of Net Promoter Scores for home purchases was submitting the same documents multiple times during the mortgage process.

Paint points.png

Document re-submissions came from revision requests.

It was clear that submitting the same documents was a pain point for our users. What did this look like digitally? It took the form of revision requests: troubleshooting comments from our loan team attached to tasks users had already submitted.

Deposit Documentation Revision Request 2.png

Was information being overlooked on the to-do list?

Using GlassBox (a session replay tool), we observed users actively clicking the "read more" buttons to see the requirements within each task on their mortgage to-do list. We had a few hypotheses about why revision requests were still happening if a majority of users were reading the content.

Sub-page Opportunity

To create more personalized experiences, we could move instructional content into a sub-page for each task.

Heuristically, we wanted to reduce cognitive load.

Content Overload

Each task showed only 180 characters before the rest of its content was hidden, so instructions could easily be overlooked.

Overwhelming Task List

Not only was task content overwhelming, but the number of tasks on the page also contributed to cognitive overload.

Deposit Documentation Revision Request 2 (Read More).png

There was an opportunity; it was time to make a paper trail.

Focusing on a solution that reduced revision requests was on the table. To rationalize our upcoming process, I created a metric tree with my product manager. This framework conveyed how our team's still-undefined solution would ladder up into key performance indicators that supported company-wide goals.

For example, if we could enhance the most revision-requested task, we might see a decrease in revision request volume, which would in turn decrease the average number of days it took clients to close on their homes.

Metric Tree.png

What were our key performance indicators?

Revision Request Volume

The number of revisions requested by operational team members asking clients to correct mistakes on submitted tasks.

Application to Conditionally Approved

The metric that tracked how many users started their mortgage application and finished all their tasks prior to underwriting.

THE CHALLENGE

Before we could implement change, there were a few rules.

Constraints limited my approach, but good communication between our operations, tech, and product teams helped us maneuver toward a realistic solution.

Revisions had no qualitative data

Although we had quantitative reports on how many tasks were receiving revision requests, we didn't have qualitative data on why they were happening.

Our new design system had to wait

Our origination dashboard could not use our company's new design system due to backend limitations tied to our legacy systems.

Legacy systems controlled logic

Tasks were powered by legacy software that limited how information could be manipulated. 

Editing content required oversight

The content within the tasks was managed by various operational teams. Changes required their approval.

What was the most problematic task our users faced?

To pursue this problem accurately, I reached out to a point of contact on the data intelligence team. On a monthly basis, they created reports that revealed which tasks purchase clients struggled with the most. At the time, "Deposit Documentation" had been the most problematic task across most mortgage types for the previous three years.

Deposit Documentation Rank.png

Deposit Documentation was a task that asked users to explain or prove how certain deposits in their bank statements were transacted. The task stemmed from anti-money laundering laws to ensure all deposits used for the mortgage were legal.

When we asked why this task was so problematic, we hit a dead end.

Our legacy software couldn't track qualitative themes behind revision requests. Our loan team also wrote custom troubleshooting entries for each problem, so there were no uniform answers on how clients could fix their issues. Updating the system to start tracking this qualitative data would have taken excessive time and budget. The qualitative data I needed to extract and synthesize simply did not exist.

Data Intelligence.jpg

CROSS-FUNCTIONAL OUTREACH

Help was waiting around the corner.

We had ideas for different split tests to run. We knew the current task page was outdated and needed sub-pages to accommodate the content-heavy task cards. But without qualitative data on the main struggles behind these revision requests, we had little direction on where to take our designs next.

That is, until we remembered a few of GlassBox's capabilities.

GlassBox List.png

If there were HTML tags, data could be tracked.

One department of our data intelligence team managed reports in GlassBox. During weekly office hours, we were told that if any objects in the live environment had HTML tags, their data could be tracked. This sparked my idea of aggregating all the custom revision request messages (using their defined HTML tags) to the primary key of the task each was attached to.

Thus creating the first qualitative revision report.

The GlassBox team spent two weeks creating this report for us. Once it was ready, we could filter tasks by their primary task ID and see scraped text of the feedback our loan team was giving users. The report could even be exported as an Excel sheet so we could extract qualitative insights on the recurring themes behind each task's revision requests.
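To give a sense of the analysis this export enabled, here is a minimal sketch of how the scraped revision comments could be grouped by task ID and tagged with themes. The file name, column names, and keyword lists are hypothetical stand-ins; the real report schema and theme definitions came from the data intelligence team.

import pandas as pd

# Hypothetical theme keywords; the actual themes were defined by our data scientist.
THEMES = {
    "needs_clarification": ["what is this", "explain", "purpose of this deposit"],
    "incomplete_documents": ["missing page", "illegible", "full statement"],
}

def tag_theme(comment: str) -> str:
    """Return the first theme whose keywords appear in a revision comment."""
    text = comment.lower()
    for theme, keywords in THEMES.items():
        if any(keyword in text for keyword in keywords):
            return theme
    return "uncategorized"

# Load the exported GlassBox report (file and column names are assumptions).
df = pd.read_excel("revision_requests_export.xlsx")
df["theme"] = df["revision_comment"].fillna("").map(tag_theme)

# Count revision requests per task and theme to surface the biggest pain points.
summary = df.groupby(["task_id", "theme"]).size().sort_values(ascending=False)
print(summary.head(10))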

GlassBox.png

A NEW TASK

We could finally see why these tasks failed our users.

Our appointed data scientist synthesized the data and defined the major themes behind why users failed to complete this task on their first attempt. Two themes accounted for the majority of failures.

Clarify their submissions

Most revision requests simply needed clarification on what certain transactions were for.

Incomplete documents

The second theme comprised users who submitted documents that did not meet requirements.

We asked for clarity but didn't provide the right tools to capture user responses.  

The main discovery was that when this task was sent to a user, the loan team didn't always need a new document to review a transaction. Sometimes they just needed the user to clarify what specific payments on their statements were for. We even captured sessions of loan agents explaining this to users in online chat threads.

Deposit Revision Request.png
online chat session.png

Our approach for validation

I designed a new task that gave users the missing ability to simply respond. We named the task "Deposit Explanation." If a user needed to provide a written response, a text component was available to collect their statement.

We used a sub-page for the task so the text box wouldn't be hidden behind the 180-character limit of the task page.

Deposit Explanation.png

We achieved a reduction in revision requests.

Two months after launching Deposit Explanation, we saw a 4% reduction in overall revision requests.

FUTURE-PROOFING OUR SYSTEM

It was time to define and apply learnings.

As we planned for the future, these learnings validated a few design patterns we wanted to implement across our entire task management system.

Sub-pages help clients see task details and reduce cognitive load on the to-do list page.

They also allow us to create personalized experiences based on a user's loan profile.

Flow Chart.png

ADDITIONAL ENHANCEMENTS

We created a robust system that could scale.

I saw a clearer path to raising Net Promoter Scores in the purchase experience. Our team started ideating a future task management system built on the core learnings of our Deposit Documentation split test. We held design reviews with our engineering and product partners to keep everyone aligned on the future state we wanted the task management system to reach.

Task status indicators

Clear status indicators helped users understand when and why they had to complete each task.

Current Tasks.png

Daisy-chained success state pages

To encourage users to continue with their tasks, we made a success page after every task submission with a call to action advancing them to the next task on their list.

Success Page.png

An "in review" section to monitor progress

The current task system conveyed that every task that was completed was "finished'. I created an "in-review" section of the task page to help users understand that a task was not completely finished until a loan agent marked the task complete. 

In Review.png

ENDING NOTE

Setting up others for success.

As we were closing in on the new designs of the task management system, Rocket Mortgage had a department-wide reorg. In the midst of that transition, I was transferred to the lead generation team. 

The work was on the verge of being finished, and it felt bittersweet to leave the team before seeing the features go live. What felt great was knowing that, in the process of getting the task management system to an ideal state, we fully communicated pivots and new learnings with our tech and product partners.

Our ability to gather qualitative research to influence our design rationale was a career moment for me. This project expanded my approach to building a deeper understanding of my end users, which is why I believe it is one of the most influential projects of my career to date.

Key Learnings

- Third-party tools can be used to create qualitative research reports.

- Legacy systems need maintenance and careful audits before enhancements can be built.

- Use small feature releases to validate ideas for future roadmaps.
