This one secret will save you $100,000 on accessibility

I’ve historically been very critical of the various business case arguments for accessibility, given their lack of actual evidence. But there’s one business case argument that I think is rock solid: the cost of remediation.

The cost of remediation actually has two faces: The actual time-on-task it takes to fix issues, of course, but also the lost opportunity dollars that come from development staff time being diverted to bug fixes rather than being spent on new features. We’ll skip past the opportunity cost for now because the actual remediation cost is enough to get our point across anyway.

average cost per defect = (number of devs * number of hours * fully loaded dev cost per hour) / total bugs to be fixed

The above gives us the average cost per defect. It depends mostly on two factors: the time to fix the average bug and the number of bugs fixed. Fixing fewer bugs raises the cost per bug, because developers can (and will) knock out multiple bugs of the same type rather quickly once they recognize the pattern. But there’s no getting away from the fact that the more bugs there are to fix, the more money it will cost.
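As a quick sketch, the formula above can be turned into a few lines of code. All of the numbers in the example are made up for illustration; plug in your own team’s figures.

```python
def average_cost_per_defect(num_devs, num_hours, cost_per_hour, total_bugs):
    """Average remediation cost per defect, per the formula above."""
    return (num_devs * num_hours * cost_per_hour) / total_bugs

# e.g. 3 developers each spending 80 hours at a $100/hr fully loaded rate,
# fixing 120 bugs:
print(average_cost_per_defect(3, 80, 100.0, 120))  # 200.0 dollars per bug
```

Notice how the per-bug figure falls as the bug count rises for the same effort, which is exactly the averaging effect described above.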

So what’s the cost? That depends a lot on how accessible you are starting off! Across 800,000 tested URLs, an average of 42 accessibility issues per page has been logged. That sample is large enough to be meaningful, and automatically-testable accessibility issues are only a subset of all possible issues. This indicates that full sitewide remediation of all issues could be very expensive and time consuming. In fact, the $100,000 number in this post’s title isn’t made up. It is an actual estimate of the cost to fix bugs on a project I’ve worked on.

Of course, there’s the option of not fixing the bugs. There may be instances where, through effective prioritization, we decide not to fix some issues. The overall truth remains that avoiding the bugs in the first place is by far the cheapest option. The ROI argument here is easy: how many bugs can we avoid, what would they cost to fix, and, while we’re at it, how much risk are we avoiding?
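The avoidance argument can be sketched as back-of-envelope arithmetic: the money you do not spend fixing bugs you avoided, minus what prevention itself cost. Every number below is hypothetical, purely for illustration.

```python
def remediation_savings(bugs_avoided, cost_per_bug, prevention_cost):
    """Net savings from preventing bugs rather than fixing them later."""
    return bugs_avoided * cost_per_bug - prevention_cost

# e.g. avoiding 400 issues at $200 each, after spending $20,000 on
# training and earlier testing:
print(remediation_savings(400, 200.0, 20_000.0))  # 60000.0 dollars saved
```

And this still ignores the opportunity cost mentioned earlier and the risk that goes unaddressed while bugs sit in the backlog, both of which tilt the math further toward prevention.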

Doing it right the first time has instant ROI.

If you are interested in learning about the next generation in Web Accessibility Testing, give it a try.
If you or your organization need help with accessibility consulting, strategy, or accessible web development, email me directly at or call me at +1 443-875-7343. Download Resume [MS Word]


  • Posted September 29, 2014 at 9:37 am

    Additionally, the effect on morale can be devastating. For a developer, a long list of bugs (a11y-related or not) screams into your brain that you’re not as good as you thought you were.
    That’s why a blunt or aggressive tone can ruin a good testing report…
    Of course, the costs incurred by a feeling of shame or resentment cannot be calculated. But I bet the figures would be scary.

  • asabaylus
    Posted October 7, 2014 at 10:20 pm

    @Oliver I couldn’t agree more. I like to think of myself as delivering great UX. So it’s very demoralizing to be on the receiving end of an accessibility audit which shows, often painfully, where we as designers and developers have failed our users and employers.

    I recently overheard an exchange which helps frame this sentiment. A friend of mine, a brilliant engineer I sometimes work with, created a relatively trivial feature which was committed to master. This code change broke our CI build pipeline, which is a big deal. A static code analysis step had been introduced into our builds, only my friend didn’t know about it. Worse yet, none of us on the team could run the static analysis locally. So the first time we got to see if we passed was after it was already too late.

    It is entirely unfair to expect engineers to be responsible for delivering code which adheres to any set of standards but deny them the opportunity to validate their code before it gets committed.

    We would never accept this for static code analysis, so why would we accept anything different for accessibility?

    It gets even worse, because only in the a11y world do we expect engineers to use FireEyes (it’s free!) and QA to use SortSite, and then assume the results will be identical.

    There is a misconception that all tools will produce the same results, and that’s simply not true. You are setting yourself and your company up for failure if your engineers lint using one tool and your QA validates with a different tool.

    Full disclosure I’m working with Karl to build Tenon 🙂
