Case Study

Platform

Mobile-Native

Role

Product Designer

Team

CEO, SPE, QA

The Problem

Senior leadership shifted direction — automation became the new priority

The audit routing tool was paused after the SLT decided routing could be achieved through simple logic. Site-mapping inconsistencies and regulatory constraints prevented AI from generating reliable, repeatable coverage logic across all environments. We redirected focus from AI-driven planning to AI-driven automation — shifting the initiative toward optimizing in-the-moment audit tasks rather than pre-audit decision-making.

The Ask

Rebuild fast — without going back to research

After the routing tool hit scaling limits, we needed a new direction that could reuse existing insights. The challenge was to design a viable solution using what we already knew.

Design around the repository — not against it

The previous redesign had improved planning and visibility. The SLT saw value in keeping it — so the next tool needed to extend, not replace, what managers already used.

Extend the AI narrative — as a valuable marketing asset

The company had invested in AI as part of its brand. This tool needed to reflect that direction — something managers could rely on, and leadership could showcase.

What We Built

A redesigned audit workflow focused on automation

We replaced traditional scoring with a pass/fail system and used AI to detect visual issues — removing most of the manual effort.

A mobile-native interface built for fast, in-field use

Designed for speed and simplicity — quick taps, smart defaults, and intuitive gestures that matched how managers move through audits on site.

A seamless extension of the existing repository

We embedded the tool into the same daily workflow — keeping context intact and eliminating the need for retraining or new habits.

The Outcome

The new tool cut time-on-task by 75% — turning the slowest part of auditing into a fast interaction. It saw high satisfaction in testing, zero drop-off in the field, and full SLT support.

You're about to enter the realm of details. Below is the detailed process that led to the end product.

The Plan & Process

With no time to rethink the problem, I had to rethink the approach — moving from planned to rapid audits

I built a plan that worked within the limits. No new research, no new infrastructure, and no disruption to what was already working. That meant staying grounded in what we’d already learned, narrowing the scope to what we could actually influence, and designing a solution that could be tested, trusted, and shipped fast.

I ran a task analysis over the previously created user journey to reexamine behavior and reframe the problem

To reset the direction, I went back to the original research and ran a fresh task analysis using existing observation data. I wasn’t looking at screens — I was looking at what managers actually did. I broke down where time was spent and how attention shifted during audits. That helped me reframe the problem as a task-level issue, not a feature gap.

I used behavioral alignment mapping to surface gaps between design logic and real-world behavior

Task analysis showed where time and effort were going — but to understand why audits weren’t scaling, I needed to see where behavior and system design were misaligned. I used behavioral alignment mapping to trace how managers actually moved through audits versus how the system expected them to.

To narrow the scope, I first defined the rules of the project

With multiple friction points across the workflow, I needed a way to decide what was worth solving — and what wasn’t. I listed the non-negotiable constraints and used them to filter the solution space down to what was both high-impact and actually buildable.

Solve the task with the highest effort and lowest payoff

Prioritize where time is lost and value is unclear — not just complaints.

Focus on blockers that appear across users, not isolated cases

Target patterns that consistently break flow or delay action, not outliers.

Only consider solutions that fit existing system constraints

No backend changes. No new infrastructure. No added training.

Prioritize solutions that can be designed and shipped fast

Avoid cross-team dependency, heavy integration, or anything requiring deep validation.

Choose a problem that supports the broader narrative

The outcome had to demonstrate progress on AI — not just improve UX, but advance the story.

Findings & Key Insights

Task flow analysis revealed commenting and scoring as the steps holding audits back

Even without new research, the patterns were hard to ignore. The system wasn’t failing because it lacked features — it was failing because it didn’t fit how managers actually worked in the field. What seemed like small friction points were symptoms of a deeper misalignment between behavior, environment, and design.

Key Insights

No area segmentation or evaluation structure — scoring had no anchor

Photos were taken opportunistically; there was no consistent basis for area segmentation and no guidance on how to evaluate areas.

Commenting wasn’t repeatable — managers had to invent it every time

Every comment was a fresh judgment. Managers had to decide what to note, how to word it, and what level of detail to include — all while walking the site.

The highest-effort tasks weren’t just slow — they were structurally unsupported

They required managers to manually fill in context the system didn’t provide. The result was inconsistent input, high cognitive load, and no path to scale.

There was no evaluation structure — so managers had to invent the logic for every audit, every time.

  • Managers took photos opportunistically — not in a fixed flow

  • Comments were based on individual judgment, not defined standards

  • Different managers commented on different aspects of the same area

  • Audit quality varied depending on what was noticed, not what was required

  • This variability made it hard to speed up or standardize the process

Shaping the Solution

I partnered with process and development engineers to translate findings into AI logic — and define where automation could integrate smoothly

With the core issues now clear — inconsistency, subjectivity, and lack of structure — I worked with process engineers to translate the findings into system logic. We didn’t start with “how do we use AI?” — we asked, “how can AI support the way managers think, decide, and move?” That meant narrowing the problem to what could be reliably automated, without replacing human judgment.

Engineers scoped where AI could help — implementing visual detection and a basic Pass/Fail model

We kept the scope narrow: AI wouldn’t invent scores or write comments — it would analyze photos and suggest a binary outcome. If the majority of images in an area were clean, the system could recommend a pass. Everything else stayed in the manager’s hands. This kept the logic explainable, predictable, and easy to override.
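
To make the decision rule concrete, here is a minimal sketch of that logic in Python; the names, types, and majority threshold are illustrative assumptions, not the production model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhotoResult:
    photo_id: str
    is_clean: bool  # assumed boolean output of the visual-detection model

def recommend_outcome(photos: list[PhotoResult]) -> str:
    """Suggest Pass/Fail from per-photo classifications; never a final verdict."""
    if not photos:
        return "NEEDS_REVIEW"  # nothing to base a recommendation on
    clean = sum(1 for p in photos if p.is_clean)
    return "PASS" if clean > len(photos) / 2 else "FAIL"

def final_outcome(recommended: str, manager_override: Optional[str] = None) -> str:
    """The manager's decision always wins; the AI output is only a default."""
    return manager_override or recommended
```

The key property is that the system only ever returns a suggestion, and overriding it is a single action, which is what kept the logic explainable and easy to trust.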

The core need was structure — a prerequisite for the Pass/Fail model to work

Once we scoped what AI could do, the real problem came into focus: managers weren’t struggling to perform audits — they were struggling to structure them. Without defined segments, evaluation criteria, or repeatable flows, every audit was a blank slate. That’s what made scoring and commenting so slow — not the tools, but the lack of behavioral scaffolding underneath them. The AI couldn’t fix that on its own — the product needed to provide the structure first.

Process engineers turned “cleanliness” into scoring logic — breaking down scores by elements and photos

To replace inconsistent judgment with repeatable rules, process engineers built a model that broke each area down into its core elements — like floors, desks, or fixtures. Each element contributed to the total score, and each submitted photo represented a portion of that element’s value. For example, if floors made up 40% of the area score and a manager submitted four floor photos, each photo carried 10% weight. If one looked dirty, the area failed that portion. This logic gave audits a shared standard — and gave AI a framework to assess consistency.
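
A minimal sketch of that weighting model, built around the floors example above (element weights and example values here are illustrative assumptions, not the engineers' actual parameters):

```python
# Element weights for one area; they must sum to 1.0 (illustrative values).
ELEMENT_WEIGHTS = {"floors": 0.40, "desks": 0.35, "fixtures": 0.25}

def area_score(photos_by_element: dict[str, list[bool]]) -> float:
    """Each photo carries an equal share of its element's weight.
    Floors at 40% with four floor photos means each photo is worth 10%."""
    score = 0.0
    for element, weight in ELEMENT_WEIGHTS.items():
        photos = photos_by_element.get(element, [])
        if not photos:
            continue  # unphotographed elements contribute nothing yet
        per_photo = weight / len(photos)
        score += per_photo * sum(1 for is_clean in photos if is_clean)
    return score

# Worked example from the text: one dirty floor photo out of four drops
# the floors contribution from 40% to 30%, so the area scores 0.90 overall.
example = {"floors": [True, True, True, False], "desks": [True], "fixtures": [True]}
print(round(area_score(example), 2))  # 0.9
```

Because every photo maps to a fixed slice of a known element, two managers photographing the same area produce comparable scores, which is the shared standard the model was meant to create.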

Process logic shaped the UI rules — design had to reflect the logic

With AI rules and structure in place, the next challenge was making them usable. The interface had to show managers how each photo contributed to a pass/fail outcome, surface each element’s cleanliness clearly, and allow easy overrides — all without slowing them down. These needs shaped every screen that followed.

The Solution Design

AI logic framed the new audit rules, so I designed the system around it

The goal wasn’t just to display AI output — it was to design an interface that reflected the new audit structure and kept managers moving. Every screen was shaped by the same principles behind the logic: clarity, speed, and adaptability.

Designed as an extension of the repository — we introduced a system-generated audit list

Built on simple logic: prioritize areas with incomplete data, longer gaps since last audit, or recent failures. We designed the audit suggestion list as a direct extension of the repository — allowing managers to begin their day with clear, structured guidance instead of relying on memory or habit. Each suggested area includes a breakdown of predefined elements, giving audits a consistent frame before any action is taken. For flexibility, we added a Scan QR option to support ad hoc or reactive audits without breaking the logic-driven flow.
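
As a rough sketch of how such a list could be ordered (field names and weights are hypothetical; the actual repository logic may differ):

```python
from dataclasses import dataclass

@dataclass
class AreaRecord:
    name: str
    days_since_last_audit: int
    has_incomplete_data: bool
    failed_last_audit: bool

def priority(area: AreaRecord) -> float:
    """Higher score means the area is suggested sooner. Weights are illustrative."""
    score = float(area.days_since_last_audit)  # longer gaps rise to the top
    if area.has_incomplete_data:
        score += 30                            # missing data bumps priority
    if area.failed_last_audit:
        score += 50                            # a recent failure bumps it most
    return score

def suggested_audit_list(areas: list[AreaRecord]) -> list[AreaRecord]:
    return sorted(areas, key=priority, reverse=True)
```

The Scan QR option sits outside this ordering, so ad hoc or reactive audits can start immediately without breaking the logic-driven list.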

Predefined area elements drove the flow — I designed it in a bottom sheet to keep it fast and flexible

To bring structure without slowing managers down, I designed the experience as a bottom sheet. This let managers stay in flow: no page changes, no loading delays, no disorientation. They could open an audit, snap photos, and close it out without ever leaving context. Each prompt ensured consistency; the interaction model ensured speed.

To keep audits rapid, I designed to minimize steps — feedback had to be inline and minimal

  • Inline feedback — so managers don’t second-guess what just happened

  • No modal interruption — keeps the interaction fluid, not jarring

  • Behavioral reinforcement — helps managers learn how the system sees cleanliness over time

The audit closed fast — so I designed a temporary review in a bottom sheet for immediate clarity

  • Final Pass/Fail surfaced instantly — while detailed scoring processed in the background

  • Photos grouped by element — so managers could trace outcomes visually

  • Auto-generated comments — reinforced the system’s interpretation of cleanliness

  • Color indicators and layout — made it easy to skim and spot issues fast

  • Available in both mobile and web — accessible from the same suggestion list or repository

The Outcome

The new structure redefined quality control — 75% less time on task and 3x output in pilot testing at LaGuardia

  • 75% reduction in time on task — commenting and scoring became near-instant

  • 3x audits per manager — boosted output without extra effort

  • Pilot tested at LaGuardia Airport — validated in a high-traffic, high-pressure environment

  • No retraining needed — adoption was seamless across teams

  • Set a new standard — the flow became a company-wide model for AI-supported quality control

Reflection

This project showed how solutions can be reshaped — I had to reimagine the solution beyond the obvious and design for engineers alongside the users

This project pushed me to think beyond the expected solution. When planning fell apart, I had to design around behavior, not ideal workflows. I learned to work within tight boundaries — no new research, no major changes to infrastructure — and still build something impactful. Collaborating closely with engineers, I saw how structure, logic, and interaction design can work together to make AI usable, not just powerful. More than anything, it reminded me that great UX isn't just about features — it's about shaping systems that support real decisions in real time.

Thank You for Your Time!

Get in Touch