Finding what no one on the team has encountered — because no one on the team uses the product like a stranger meeting it for the first time.
No code. No automation. No AI. Just the shipped product, walked on real consumer hardware, used the way your customers actually use it.
Solutions create arguments, scope creep, and consulting obligations. This service identifies friction and stops there. A white button on a white background is inarguable. What gets done about it is your domain.
Every finding in the report describes what happened — not what should have happened. No recommendations, no code suggestions, no redesigns. The report is a mirror, not a blueprint.
An outside auditor encountering your product for the first time doesn't know your codebase, your team structure, or which department owns what. Offering solutions from that position is guesswork dressed as consulting. You have the people and the context to fix what we find — we assume that from the start.
These are the kinds of things that live in shipped products and active operations — things a reviewer might notice but never articulate, or an employee might feel but never report. Drawn from UX audits and operational audits across different industries.
UX AUDIT — HYPOGEA (STEAM DEMO) · 62 FINDINGS IN 115 MINUTES
[High] Selecting "Continue" after completing the demo loads a save state past the demo boundary — the developer removed the floor as a gate. The player falls infinitely through the void with no way to recover except force-quitting.
[Systemic] Five separate interactions display WASD as the control prompt when only a subset of those keys actually function. Not five individual bugs — one prompt system that doesn't query actual input bindings.
UX AUDIT — INSIDER TRADING (STEAM DEMO) · 41 FINDINGS IN A PARTIAL SESSION
[High] First action allowed was clicking Trade, which spent all money buying shares. The tutorial only explained what Trade does after the action was already taken. Explanation follows consequence instead of preceding it.
[Medium] Balance resets to $1,000 at the start of each week with no prior warning. Accumulated progress is silently wiped, undermining any motivation to earn money. No reason is given for why the player should care about a balance that vanishes.
OPERATIONAL AUDIT — IMAGEFIRST LINEN TECH (HEALTHCARE) · 102 FINDINGS IN 30 DAYS
[High] The badge issued belonged to a recently transferred employee — still active, registered to someone of entirely different height and build. On that person's credentials, an employee entered a hospital daily. When raised, the response was "work it up the chain." A proper badge took two weeks to arrive — and under two hours to produce once the right person noticed.
[Medium] The interview stated 2–3 weeks of significant training. Actual training totaled less than 6 days. Solo shifts began immediately after, with no structured review period — just one follow-behind to confirm carts were acceptable.
Findings drawn from audits of Hypogea (Steam), Insider Trading (Steam), and ImageFirst (Olathe Hospital) — used to demonstrate, not to disparage. Severity labels shown here reflect internal prioritization and are not necessarily included in client reports.
Reports reference a structured taxonomy built from 16 months of research across technology, institutions, and daily life. Pattern names are visible in each report. Full descriptions and trigger mechanics are proprietary.
No guides. No developer documentation. No walkthroughs. The product is used the way your next customer will use it.
Testing occurs under authentic consumer entry conditions: real consumer hardware, default settings, no prior optimization or familiarity, and typical connectivity. This reflects true first-time and real-world usage scenarios.
Every interaction is approached as a first-time user with no prior knowledge. No assumptions about intended behavior. If it isn't communicated by the product, it doesn't exist.
Findings are documented as observational facts — what happened, where, and under what conditions. Each finding is mapped to an anti-pattern in the catalog.
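A finding record of this shape can be sketched as a small data structure. This is a minimal illustration, not the actual report schema — the field names, the pattern name, and the example values below are assumptions for demonstration only:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    what: str          # what happened, stated observationally
    where: str         # location in the product or process
    conditions: str    # hardware, settings, and state at the time
    anti_pattern: str  # pattern name from the catalog
    severity: str      # internal prioritization; may not appear in the deliverable

# Hypothetical record based on the Hypogea example above.
finding = Finding(
    what="Continue loads a save state past the demo boundary; player falls through the void",
    where="Main menu, Continue option, post-demo",
    conditions="Demo completed, default settings, consumer hardware",
    anti_pattern="Removed gate, no boundary",  # illustrative name only
    severity="High",
)
```

The point of the shape: every field is an observation or a catalog reference — there is no `recommendation` field, by design.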
The report is compiled and structured by category. Severity is assessed internally to guide audit focus and prioritization — it may or may not appear in the final deliverable. No follow-up call. No sales pitch. The findings speak for themselves.
The people who made the product are the last people who can encounter it fresh. This is the layer between QA, automated testing, and what a real person actually experiences.
Every template, page, and process used by this service was tested against its own anti-pattern catalog before shipping. The methodology, the report structure, and this website were held to the same standard applied to clients.
The audit methodology is rooted in Lean Six Sigma — a discipline born in manufacturing and refined across healthcare, logistics, and operations over decades. Where traditional UX review relies on instinct and heuristic checklists, Lean Six Sigma demands something harder: measurable observation, structured categorization, and repeatable process. The auditor holds an active Lean Six Sigma certification.
Lean Six Sigma matters here because UX problems are process problems. Every friction point a user encounters is a defect in a workflow — a point where the expected path and the actual path diverge. Lean principles eliminate waste: unnecessary steps, redundant inputs, information that arrives too late to be useful. Six Sigma provides the measurement framework: defining what constitutes a defect, documenting the conditions that produce it, and categorizing it against known failure modes so it can be addressed systematically rather than anecdotally.
This combination is what separates a structured audit from a list of complaints. Findings are not opinions — they are documented observations mapped to a taxonomy of anti-patterns, each with defined trigger conditions and contextual severity. The same rigor that reduces defect rates on a production line applies directly to reducing friction in a shipped product. The discipline scales because it was built to.
This isn't theoretical. UX audits, operational audits, and technology audits have all been delivered under this framework — from solo-developed Steam games producing 41 findings in a partial session, to a 30-day embedded operational audit inside a healthcare linen facility producing 102 documented findings, to restaurant walk-ins evaluated on service flow, menu friction, and front-of-house process gaps. The methodology applies wherever a person interacts with a system someone else designed.
Each audit report includes a verification code unique to that engagement. The code embeds an abbreviation of the business name and audit date inside randomized characters — verifiable by the recipient, meaningless to anyone else.
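One way such a code could be constructed — the actual scheme is not disclosed here, and the functions, positions, and lengths below are illustrative assumptions — is to place the business abbreviation and audit date at agreed positions inside otherwise random characters:

```python
import random
import string


def generate_code(business: str, audit_date: str, length: int = 20) -> str:
    """Embed a business abbreviation and a compact date inside random characters.

    Illustrative scheme only: the payload (initials + date digits) occupies
    fixed even positions; every other slot is filled with random characters.
    """
    abbrev = "".join(word[0].upper() for word in business.split())
    payload = abbrev + audit_date
    if len(payload) * 2 > length:
        raise ValueError("length too short for payload")
    code = [random.choice(string.ascii_uppercase + string.digits) for _ in range(length)]
    for pos, ch in zip(range(0, length, 2), payload):
        code[pos] = ch
    return "".join(code)


def verify_code(code: str, business: str, audit_date: str) -> bool:
    """Recipient-side check: does the expected payload sit at the agreed positions?"""
    abbrev = "".join(word[0].upper() for word in business.split())
    payload = abbrev + audit_date
    return all(code[pos] == ch for pos, ch in zip(range(0, len(code), 2), payload))
```

A recipient who knows their own name and audit date can verify the code; to anyone else it reads as noise.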
If you received a report, the code on it was generated for you. Enter it below to verify.
Enter your verification code below to view your full findings.