
Accessibility consultant. WCAG specialist. Digital inclusion expert. These titles get used interchangeably and none of them are regulated, which means anyone can put them on a CV or an Upwork profile and there’s no body checking whether they actually know anything.
I work as one, so I have an obvious interest in you understanding the difference. But I’ve also seen enough of the market to give you a genuinely useful steer.
“Consultant” is a broader term than “specialist”
When someone calls themselves an accessibility consultant, they might mean strategic advisory work: helping organisations build accessibility programmes, write policies, and set roadmaps.
That’s useful, but it’s different from someone who rolls up their sleeves and tests your product against WCAG criteria.
When someone calls themselves a WCAG specialist, they’re usually pointing at technical expertise - auditing, code review, assistive technology testing.
A lot of people do both. I do both. But when you’re hiring, it helps to be clear about which one you actually need right now, because the deliverables look very different.
Do you need someone to tell you that you have accessibility problems? That’s an audit.
Do you need someone to tell you how your organisation should think about accessibility over the next year? That’s strategic consultancy.
Do you need someone to sit alongside your developers and review pull requests? That’s embedded specialist work.
The questions that actually filter well
What does your audit process look like, end to end?
You want to hear them talk about manual testing - keyboard navigation, screen readers, zoom testing. If they jump straight to automated tools, probe further.
Automated scans catch maybe 30-50% of real issues.
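For context, here’s roughly what “running the tool” amounts to - a minimal sketch using Playwright with @axe-core/playwright, where the URL is a placeholder and the setup is illustrative rather than anyone’s actual audit pipeline. Notice that axe itself reports “incomplete” checks it can’t decide on; the manual work starts where a script like this stops.

```typescript
// A tool-only pass: run axe against a page and list what it flags.
import { chromium } from "playwright";
import AxeBuilder from "@axe-core/playwright";

async function scan(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // analyze() returns violations, passes, and "incomplete" checks that
  // axe couldn't decide on its own - those need a human regardless.
  const results = await new AxeBuilder({ page }).analyze();
  for (const v of results.violations) {
    console.log(`${v.id} (${v.impact ?? "unknown impact"}): ${v.nodes.length} instance(s)`);
  }

  await browser.close();
}

scan("https://example.com/checkout").catch(console.error);
```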
Have you worked with a team at roughly our level of maturity?
Someone who’s spent their career on enterprise accessibility programmes might not be the best fit for a startup that’s never thought about this before. And vice versa. Fit matters.
Can you show me a finding from a previous report?
Just one finding. Well-written findings explain what’s broken, why it matters to specific users, which WCAG criterion it fails, and what the fix looks like. A weak finding says “colour contrast fails 1.4.3.” A strong one explains what happens to a low-vision user, references the specific element, and gives the developer something they can action in an hour.
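The maths behind that example criterion is also small enough to check yourself. Here’s a sketch of the WCAG contrast-ratio calculation - the colour values are illustrative, not taken from any real report:

```typescript
// The arithmetic behind WCAG 1.4.3: contrast ratio between two sRGB colours.
// AA requires at least 4.5:1 for normal text and 3:1 for large text.

type RGB = [number, number, number]; // 0-255 per channel

// Relative luminance, per the WCAG definition.
function luminance([r, g, b]: RGB): number {
  const [R, G, B] = [r, g, b].map((v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 up to 21:1.
function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Light grey text (#999999) on white comes out around 2.85:1 - well short of 4.5:1.
console.log(contrastRatio([153, 153, 153], [255, 255, 255]).toFixed(2));
```

A specialist citing 1.4.3 should be able to tell you the measured ratio and the threshold it misses, not just the criterion number.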
What do you do when you’re not sure?
This one reveals a lot. Accessibility has grey areas - criteria that are genuinely contested, or situations where the spec doesn’t cleanly resolve. An honest specialist will tell you when they’ve hit one. Someone performing confidence on everything should make you mildly suspicious.
What a red flag looks like
The most common: leading with a tool rather than a methodology. “I use axe” isn’t a testing approach, it’s a starting point. Ask what happens after the tool runs.
Second most common: framing accessibility as a one-time fix. You get the audit, you fix the findings, you’re done. That’s not how it works. A product changes. New features get built. Old fixes regress. Someone selling you a one-off certificate and walking away isn’t setting you up well.
Third: recommending an overlay. If someone suggests a plugin overlay or toolbar as a serious remediation strategy, find someone else.
One practical thing
Before committing to any significant engagement, give them a small paid test. Pick a real page from your site - not the homepage, something a bit complex - and ask them to write up what they find. You’ll know pretty quickly whether the person you’re looking at is the real thing.
I work with organisations on accessibility audits, design reviews, and helping teams build accessibly from the start. If that sounds like what you need, my contact page has more detail.