Web Accessibility Testing: Do Automatic Testing First

Ask any ten accessibility people how they test for accessibility and you’re bound to get a different answer from each one. Some people test with JAWS or other assistive technologies and, if they can use the site, they “pass” it. Some people subject the site to a series of ad hoc tests for things they deem important. Some people use a checklist. Others use complicated methodologies to ensure complete and thorough coverage. On some level, each method has its merits. In the grand scheme, so long as we’re making progress toward a more accessible Web, I think we’re doing good. I do, however, have very strong feelings that how people test could be improved in ways that make the testing more efficient, less intrusive on projects, and more impactful for end users. One thing we should rethink is the role and effective use of automated testing.

Old & Busted Opinions on Automated Web Accessibility Testing

Typically, approaches to automated accessibility testing fall under three categories:

  1. “Automated testing sucks and I won’t do it”
  2. “Automated testing rules and that’s all I need to do”
  3. “Automated testing is a valuable component to my all-inclusive audit methodology”

Those who feel automated testing sucks are partly correct. In the early days of web accessibility, automated tools were dumb. They were very prone to false positives and, as the web has evolved, such first-generation tools are incapable of handling complicated workflows and interfaces that make extensive use of client-side scripting. I’ll discuss this more in a future blog post, but the Cliff’s Notes version is that if your tool isn’t testing the DOM, your tool isn’t testing the right thing and you should find a new one.
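
To make the DOM point concrete, here’s a minimal sketch of the difference between scanning source markup and checking the rendered DOM. Playwright, the URL, and the missing-alt check are my own choices for illustration, not tools or tests the post prescribes; the point is simply that content injected by client-side scripts only exists in the live DOM, so a source-only scanner never sees it.

```typescript
// A sketch only: check the *rendered* DOM, after scripts have run,
// rather than the raw HTML source. Playwright is an example choice.
import { chromium } from 'playwright';

async function countImagesMissingAlt(url: string): Promise<number> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle' });

  // Evaluated against the live DOM, so script-injected images are included.
  const missingAlt = await page.evaluate(
    () => document.querySelectorAll('img:not([alt])').length
  );

  await browser.close();
  return missingAlt;
}

countImagesMissingAlt('https://example.com/').then((n) =>
  console.log(`Images missing alt text in the rendered DOM: ${n}`)
);
```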

Automated testing very definitely has a good place in any organization that does web development. The efficiency afforded by the use of an automated accessibility testing tool cannot be matched. Automated testing becomes more vital the more front-end development you’re doing and the more content you’re creating. There comes a point at which the accuracy of human effort cannot overcome the efficiency of a quality testing tool. But that doesn’t mean you can stop at automated testing. Because so much about accessibility is subjective, no automated testing tool in the world can provide you with enough data to claim your system is accessible. Machine testing is valuable but not enough.

The sensible response to the above seems to be to combine automated testing with other testing methods such as manual code review, assistive technology testing, use case testing, or even usability testing. In fact, this has long been my own suggestion. As an accessibility consultant, it is hard to resist the urge to deliver a client a large, impressive report filled with extensive findings about how messed up their system is. A former boss of mine called this “plop factor”: how impressive the “plop” sound is when the hard copy report lands on the client’s desk.

The New Hotness: Do Automatic Testing First

I’d like to make an alternative suggestion. Skip the manual testing. Skip the use case testing. Skip the usability testing. Do automatic testing first.

When auditing a site, do a first round with automatic testing only

This is possibly the most radical departure from the conventional line of thinking most people have when it comes to automatic testing. As I said, some people don’t do it at all and others will use it as part of a much larger effort. I believe that instead, we should do automatic testing as a first round of testing. Most of my colleagues and peers are probably hyperventilating right now – especially those envisioning the dollar signs disappearing from their huge, impressive, high-plop-factor reports. But here’s the thing: this definitely shouldn’t be the end of the engagement with the client. Instead, the first round of testing should consist of automatic testing only, and your report should contain detailed guidance on how to remediate the problems found. The client’s development team should fix all of those problems, and then you should do a regression audit that includes both automated testing and manual testing. I would also save the use case testing and/or usability testing for a final iteration. These are all important types of testing and should be done, but here’s why I suggest the automatic-testing-first approach:

  1. You should never pay a human to find errors that can be found through automated testing.
  2. Manual testing will close the gaps on what automated testing couldn’t find.
  3. You should never uncover errors in use case or usability testing that couldn’t be found by automated and manual testing.

This is good customer service. By taking this iterative approach, you’re delivering value to your customers and helping them become compliant faster and cheaper.

Agile teams: Build automatic accessibility tests in your Definition of Done

In a proper Scrum environment, developers test their own work. In some teams, the tests and the code are written at the same time. Accessibility is a bit of a different situation, primarily because the conformance criteria are often so subjective. There is, however, a large and important subset of accessibility best practices that can be tested for automatically. Developers in Agile environments should subject their code to these tests prior to calling a task complete. QA engineers in Agile shops should never find any automatically-testable error, because the developers should take care of that stuff first. If they do, then the User Story isn’t complete.
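
As a rough illustration of what such a Definition-of-Done check could look like, here is a sketch using Playwright with axe-core. These are example tool choices, not ones the post prescribes, and the URL and test name are placeholders.

```typescript
// A minimal Definition-of-Done accessibility check, assuming a Playwright
// test suite with the @axe-core/playwright package installed.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('feature page has no automatically detectable a11y violations', async ({ page }) => {
  // Placeholder URL for the feature the developer is about to call "done".
  await page.goto('http://localhost:3000/feature-under-development');

  const results = await new AxeBuilder({ page }).analyze();

  // If this fails, the developer fixes it before QA ever sees the story.
  expect(results.violations).toEqual([]);
});
```

Wired into the team’s normal test run, a check like this means an automatically-testable error never survives long enough to reach QA.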

Test nightly builds for accessibility

Few people realize this, but some of the enterprise-class automated web accessibility testing tools can be used as a web service. Different tools do different things (which, for ethical reasons, I may be unable to comment on in much depth; sorry), but one way you can take advantage of the web services is to submit requests to the service and get back results. A really compelling idea for this would be to automatically test nightly builds so that all code submitted to version control gets tested.
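
A nightly job along these lines might look something like the sketch below. The endpoint, API key, page list, and response fields are all hypothetical placeholders, since every tool’s web service has its own API; the point is only the shape of the workflow: submit the staging pages, read back the results, and fail the build if errors come back.

```typescript
// A sketch of a nightly-build step that submits pages to an accessibility
// testing web service. The endpoint, key, and response shape are hypothetical.
const API_ENDPOINT = 'https://a11y-service.example.com/api/test'; // hypothetical
const API_KEY = process.env.A11Y_API_KEY;

const pagesToTest = [
  'https://staging.example.com/',
  'https://staging.example.com/checkout',
];

async function runNightlyChecks(): Promise<void> {
  let failures = 0;

  for (const url of pagesToTest) {
    const response = await fetch(API_ENDPOINT, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ key: API_KEY, url }),
    });
    const report = await response.json(); // shape is hypothetical

    if (report.errorCount > 0) {
      failures += report.errorCount;
      console.error(`${url}: ${report.errorCount} errors`);
    }
  }

  // Fail the nightly build so errors never land in version control untested.
  if (failures > 0) process.exit(1);
}

runNightlyChecks();
```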

Content teams: Test for accessibility before publishing

Content creators are typically not skilled in web development and often have only enough technical knowledge to do their job, which is to get content up on a site. They are not developers and are therefore often the web team members who know the least about web accessibility. As a consequence of this ignorance and the amount of content they create, content creators can be the source of a significant volume of accessibility errors on a site. The workflow of the content creators should include automatic testing to ensure no errors reside in the new content they’re about to publish. The tests should be limited to testing the content only.
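
As a sketch of how that scoping might work, the example below runs axe-core against just the element that holds the new content inside a CMS editing page. The tool choice, the '#article-body' selector, and the publish hook are assumptions made for illustration only.

```typescript
// A sketch of a pre-publish check scoped to the content region, assuming
// axe-core is available in the editor page. '#article-body' is a placeholder.
import axe from 'axe-core';

async function checkContentBeforePublish(): Promise<boolean> {
  // Scope the scan to the new content only, not the site chrome around it.
  const results = await axe.run('#article-body', {
    resultTypes: ['violations'],
  });

  if (results.violations.length > 0) {
    for (const v of results.violations) {
      console.warn(`${v.id}: ${v.help} (${v.nodes.length} instances)`);
    }
    return false; // block publishing until the content is fixed
  }
  return true;
}
```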

Do definitive accessibility tests only

In the above scenarios, you should configure your automatic testing tool so that the only things it tests for are those things which can be definitively determined to be pass/fail. In any given tool, some of the test results will be flagged as “Warnings” or “Manual Verification”. Figure out how to turn those tests off. If your tool doesn’t offer this degree of flexibility, find a new one that does. The reason for this recommendation is that you need to focus your efforts on doing things efficiently. These “warning” level results are often incorrect or require too much subjective interpretation to be an efficient use of time.
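
Using axe-core again purely as an example of the kind of configuration to look for, a definitive-only run might look like the sketch below; other tools expose similar switches under different names.

```typescript
// A sketch of limiting a scan to definitive pass/fail checks with axe-core.
import axe from 'axe-core';

async function definitiveChecksOnly() {
  const results = await axe.run(document, {
    // Restrict the run to WCAG 2.0 A/AA rules.
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] },
    // Report outright violations only.
    resultTypes: ['violations'],
  });

  // "Needs review" items land in results.incomplete and are deliberately
  // ignored here; they are the warning-level results discussed above.
  return results.violations;
}
```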

You’re Not Done

The thing to keep in mind when doing automatic testing is that you are not done. If you’re getting clean reports from whatever automatic tool you use, great. Pat yourselves on the back, because you’re doing better than the vast majority of websites out there. Regardless, you’re still not done. As I’ve said above, even the best automatic testing tool provides incomplete coverage. Anyone who gives you the impression that this is not the case should be treated with serious suspicion. Instead, you should operate with the understanding that more work is to be done before you can really claim your site is accessible. Specifically, you need to include steps in your process for manual code review, assistive technology testing, and use case testing at various stages of the development process.

Iterate and expand scope

One of the biggest barriers to adoption of accessibility, in my experience, has been the impression that accessibility is nebulous and intrusive. Using the approaches I’ve outlined above, you can build processes into your SDLC and publishing workflows that allow accessibility to have minimal impact on your business. By initially testing for a subset of high-impact issues, you can get quick wins that help minimize the pain experienced when an organization is new to accessibility. Then you can build on those successes by including a few of the more subjective checks and/or some manual testing. Increasing the scope gradually and deliberately will help minimize the perceived impact.

If you are interested in learning about the next generation in Web Accessibility Testing, sign up for the release of Tenon.io.

2 Comments

  • Posted February 3, 2012 at 10:38 am

    As usual, a solid piece of well-argued wisdom, thanks for that.
    So far, I have been a member of the crowd that hasn’t been satisfied yet by the available tools, whether because of their cost, their perceived reliability (or lack thereof), or their perceived difficulty of configuration for immediate yet complete testing. The combination of those has always led me to choose the manual way, even more so as my experience grows, which makes me faster at testing than I used to be. Plus, my now-colleague might be the fastest human on Earth when it comes to AA-level accessibility testing, so we have found no system on par with the all-manual process. Again, so far.
    But we dream of the day when we change our minds and go for automation. And I guess one of the key factors is being proficient enough with a given tool to be able to tweak it reliably, and quickly, to adapt it to the project’s needs. That would justify additional costs compensated by saved time.
    As for losing business (i.e. manual testing time, sold at expert-level rates): I don’t believe anyone should fear that.
    Firstly, because the added value of an expert is precisely to make the process reliable, which implies knowing the limits and staying within that perimeter, as you rightly mention; so expertise isn’t made redundant, it’s simply put to better use.
    Secondly, because if you can do more audits in the same time, at the end of the day, you serve more clients… possibly for a lower price, which makes you more competitive, yet you have preserved your income by converting more leads.
    And the final reason, which can rapidly become compelling: who on earth can have fun manually checking the presence of image alternatives, background colors, and validation errors? Duh. Not me.
    But, as I said, I still need to find my tool of choice… looking forward to discussing this with you at CSUN (hint, hint).

  • mgifford
    Posted February 4, 2012 at 10:36 am

    Wish I could make it to CSUN this year. I’m going to post a link to this post on my blog from a related post.

    I do think that the use of automated tools can really help everyone focus their limited time on new problems rather than chasing around fixing old ones.

    I definitely like the idea of nightly tests & exposing content teams to easy ways to evaluate the content’s accessibility before it’s published. Simple tools like CSS Holmes can provide a very simple evaluation of any HTML site.
