Accessibility portfolio

Real findings from real products, tested the way blind users actually experience them.

Each case study documents what I found, what broke, and what needs to happen next. These are not hypothetical audits. They are hands-on evaluations of public websites and apps using VoiceOver, keyboard navigation, and manual inspection. Below the audits, you will also find engineering projects where I built accessibility solutions for professional desktop software that had no VoiceOver support at all.

If you are hiring, this page is here so you can judge the work directly: how I identify blockers, describe impact, and turn what I notice into something another person can actually use.

How to read this page

The point is not just that I can spot issues; it is whether I can explain the real user impact and the next move clearly.

Hiring managers and collaborators should be able to skim a case study and answer a few practical questions quickly: Can this person identify the actual blocker? Can they tell the difference between something irritating and something that stops the experience cold? Can engineering use the writeup without losing the human context?

What to look for

  • Clear statement of the user task that failed
  • Specific reproduction details instead of vague commentary
  • Severity grounded in user impact, not drama
  • Fix direction written for teams that need to ship

Case studies

Detailed accessibility reviews of real products.

Engineering projects

Accessibility solutions I built for software that had no VoiceOver support at all.

These are not audits. These are working tools I designed and engineered using macOS Accessibility APIs, machine learning, and platform-level integration to make inaccessible professional software usable with VoiceOver.
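The tools themselves are not reproduced here, but a minimal sketch shows the kind of platform call they build on. Everything below is illustrative, not code from the shipped projects: it uses the public macOS Accessibility API (`AXUIElementCopyAttributeValue`) to read an attribute from a running app, plus a hypothetical helper showing how an element might be phrased for a VoiceOver user.

```swift
// Illustrative sketch only, not code from the shipped tools.
// The AX calls require macOS and the Accessibility permission
// (System Settings > Privacy & Security > Accessibility).
#if canImport(ApplicationServices)
import ApplicationServices

/// Copy a string-valued attribute (e.g. kAXTitleAttribute) from an element.
func stringAttribute(_ element: AXUIElement, _ attribute: String) -> String? {
    var value: CFTypeRef?
    let result = AXUIElementCopyAttributeValue(element, attribute as CFString, &value)
    return result == .success ? value as? String : nil
}
#endif

/// Hypothetical helper: phrase an element the way a screen reader user
/// would hear it, surfacing a missing label instead of hiding it.
func announcement(role: String, title: String?) -> String {
    guard let title, !title.isEmpty else { return "unlabeled \(role)" }
    return "\(title), \(role)"
}
```

Against a real app, the element would come from `AXUIElementCreateApplication(pid)` and a tool would walk `kAXChildrenAttribute` recursively; the helper illustrates the reporting stance, where an unlabeled control is announced as a defect rather than skipped.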

How I work

What I pay attention to when I evaluate an experience.

Screen reader flow

I look for whether the experience stays understandable and operable in sequence, not just whether individual controls have labels.

Keyboard flow

I care about focus order, interaction states, trap conditions, and whether a keyboard user can complete the task without guesswork.

Structure and meaning

Headings, landmarks, buttons, links, form labels, error handling, and state changes all need to communicate the right thing at the right time.

Finding format

Findings written for people who need to fix things, not admire a report.

When I document accessibility issues, I focus on what the user was trying to do, what failed, how to reproduce it, and what kind of fix direction the team needs next. I want the writeup to stay precise without flattening the actual experience.

What each finding includes

  • Issue title with clear scope
  • Affected users and user impact
  • Reproducible steps and observed behavior
  • Relevant standards reference when needed
  • Plain-language remediation direction

Tools and methods

The point is not to name every tool. It is to show how I use them to evaluate real user experience.

Assistive technology

VoiceOver, JAWS, and NVDA to evaluate whether a flow stays understandable, navigable, and complete for screen reader users.

Manual interaction testing

Keyboard-only review to check focus order, interaction states, dialogs, forms, and error recovery alongside screen reader testing.

Reporting and remediation

Findings in plain language with reproducible steps, user impact, and practical remediation guidance teams can actually work from.

Next step

If you want to hire me, the case studies should show you how I work. If you want consulting, go straight to the services page.