
5 Approaches to Dealing with 3rd party (in)accessibility

I have an embarrassing confession to make. Tenon has accessibility problems. Some of them are our fault. Some of them are based on conscious decisions. Some of them are due to our use of 3rd party content and controls. But regardless of where the issues come from, once they’re on our system they’re ours to deal with. Let’s talk about 5 ways you can deal with inaccessible code.

Before I continue, I want to point out one option that isn’t listed below: “Leave it as-is”. Historically, legal cases around inaccessible ICT have mostly exempted third party content, but that often depends on the nature and source of the third party content. Platform accessibility issues are never exempted as far as I know. So if you’re an e-retailer being threatened with a lawsuit, you can’t deflect liability over to your vendor if their platform is the basis for your entire presence. As a general rule: that which is hosted on a domain that you own is your responsibility.

If it is already on your system

Remove it

One of the biggest sources of accessibility issues on the Tenon website was the "McAfee Trustmark" we had in the footer of each page. The intent was for visitors to know that we take security seriously. Unfortunately, it doesn't appear that the folks at McAfee took accessibility seriously. The service itself is pretty cool, but at $149 a month it didn't add much value for us. Weigh that negligible value against the accessibility issues and it was an easy decision: we took it off and removed a pile of accessibility issues in the process. If a feature doesn't have a direct user benefit and has accessibility issues, the decision is easy: dump it and move on.

Replace it

Sometimes you can't just decide to do away with a feature altogether. When it comes to widgets and add-ons for sites based on WordPress or Drupal, there are often alternatives available that you can try. Another case where I've seen this work is with features that let you add your company's job postings. Often the best approach is simply to find a replacement that meets your business goals in a more accessible manner. This approach obviously means you'll need to invest a fair amount of time searching for an accessible alternative, but let's face it: that's an exercise you should've done the first time around.

Fork it and fix it

On the homepage of Tenon, we use CodeMirror. We discovered some issues with keyboard accessibility and created our own fork to fix them. Some issues remain that we want to fix before issuing a pull request, but for now we have at least made some improvements and are planning more. This approach may help you deal with immediate issues, but it can also lock you into your own forked version. The absolute best approach in this case is to issue a pull request with your improvements. That not only helps you and your users but also helps everyone else using the same product.

Improve it after the fact

What if what you're using isn't open source? What if your vendor has no plans to improve their product? It may be possible to add JavaScript to your site that fixes existing problems in code you don't own. This is something I demonstrate at The Mother Effing Tool Confuser. At a high level, if you need a quick fix for a known issue, you may be able to add some JavaScript to correct it. This is the basic principle behind Deque's Amaze product; SSB BART Group has a similar product, and Simply Accessible does custom "overlay" work for customers. One of the big downsides of this approach is that it is incredibly brittle. If you make any changes to the underlying code, you'll undo the "fix". Simply Accessible is very transparent in disclosing this to customers and up front about the fact that this is a temporary approach, which I think is awesome. Used that way, it is definitely an effective short-term approach.
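
To make the idea concrete, here is a minimal sketch of what such an after-the-fact JavaScript repair might look like. The widget class name and the missing-label scenario are hypothetical examples, not taken from Amaze, SSB BART Group's product, or Simply Accessible's work; the point is simply that you can patch accessible names onto markup you don't control, knowing the patch breaks if the vendor changes their markup.

    // Hypothetical example: patch accessibility gaps in a third-party widget
    // whose markup we don't control. Selectors are placeholders.
    document.addEventListener('DOMContentLoaded', function () {
      // A vendor-rendered search field with no label of any kind.
      var searchField = document.querySelector('.vendor-widget input[type="search"]');
      if (searchField &&
          !searchField.hasAttribute('aria-label') &&
          (!searchField.labels || searchField.labels.length === 0)) {
        // Patch in an accessible name without touching the vendor's code.
        searchField.setAttribute('aria-label', 'Search this site');
      }

      // Vendor-rendered icon-only buttons with no accessible name.
      document.querySelectorAll('.vendor-widget button:empty').forEach(function (btn) {
        if (!btn.hasAttribute('aria-label')) {
          btn.setAttribute('aria-label', 'Open chat');
        }
      });
    });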

Push on the Vendor

Over at the Tenon blog, I documented this a little bit when I discussed how "build vs. buy" is even harder when you care about accessibility. In our case, Intercom.io is an extremely valuable service for us in helping our customers. We recognize that providing excellent customer service in real time is an important differentiator: no other accessibility tool vendor lets you talk to support staff directly in real time, and a large number of customers have told us that this level of support solidified their decision to go with Tenon. Intercom has had a direct ROI for us. Nevertheless, we can't just look the other way on the product's accessibility issues. We've communicated our concerns to the vendor and will continue to do so until either they address accessibility or there's a suitable accessible alternative. We've even toyed with the idea of building our own.

Avoiding the problem

Of course, not choosing the inaccessible product in the first place is the best approach. Unfortunately, most people don't realize a product is inaccessible until it is too late. One of the ways people in the public sector have tried to deal with this is by requiring a document like a VPAT or GPAT. Vendor statements on accessibility are often laughably poor and may not even exist. This leaves you with only one choice: test it yourself or have someone else test it. Yes, I'm suggesting that you test someone else's product before you choose it – the same way you would verify that it meets your business needs, privacy needs, and security needs.

The level of effort you expend on this testing should be proportional to your exposure to risk and how much impact the specific product has on that risk. In many cases, you can get a good idea of how accessible a product is by doing only a handful of tests. The results of such testing will be less than scientific, but they will give you a strong sense of whether you should bother moving forward with that product or look elsewhere. Once you've narrowed down your choices, you can decide whether a full-blown audit is warranted.

It is impossible – and a horrible business decision – to roll your own code for every system you use and every piece of functionality on that system. A lot of things go into deciding what 3rd party product you should use. Accessibility needs to be one of those things you consider strongly, especially in the United States where litigation is happening at an unprecedented pace. There’s no such thing as “perfect” accessibility, but hopefully this post helps provide guidance in choosing the right product and how to deal with any lingering accessibility issues once the choice has been made.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

Accessibility Lawsuits, Trolls, and Scare Tactics

There has been a lot of discussion in Web Accessibility circles around "ADA Trolls" this year. The massive uptick in web-accessibility-related lawsuits that began around October 2015 is certainly a new trend in this space. While lawsuits around web accessibility are not new, the frequency and volume we've seen in 2016 definitely are.

A few days ago, 60-Minutes aired a segment on what they refer to as "Drive-by Lawsuits". In reaction, Lainey Feingold framed the segment on 60-Minutes as a "Hit Piece" – and rightly so, in my opinion, because it gave the impression that people with disabilities are being used as dupes for unscrupulous lawyers.

Not even a few days old, the 60-minutes segment is already being used to argue against making websites accessible

There are a lot of scum bag lawyers taking advantage of these laws and are actively going after small businesses that are “not in compliance”.

David Bekhour's response to the 60-Minutes segment, Anderson Cooper: What Were You Thinking?, is a must-read, but I'd like to throw a few brief thoughts of my own into the mix.

The Objective Facts about Drive-by Lawsuits and Trolling

In the United States, a few dozen ADA-related lawsuits are filed in US Federal Courts every day. (Search PACER for Case Type: 446 Americans with Disabilities and 42:12101 Americans w/ Disabilities Act (ADA) for a list.) If you were to download a full list of all of those lawsuits, you'd begin to see a handful of names appear repeatedly throughout the list. Some of those repeat names will be plaintiffs' attorneys; some will be the plaintiffs themselves.

Making a judgment about which of the listed law firms are engaging in "drive-by" lawsuits simply by their frequency of appearance would be a hasty assumption. Law firms, or even individual lawyers within firms, often specialize in specific practice areas. For instance, when my wife and I needed a lawyer to deal with our daughter's IEP, we went to a lawyer who specialized in that area. It would be silly for me to hire that same lawyer to handle contract negotiations for Tenon, because that's not her area of expertise. Similarly, there are lawyers who specialize in Disability Rights. There's even a Disability Rights Bar Association:

The Disability Rights Bar Association (DRBA) was started by a group of disability counsel, law professors, legal nonprofits and advocacy groups who share a commitment to effective legal representation of individuals with disabilities.

The lawyers listed in PACER as having filed the suits are likely simply doing what lawyers do for any other type of lawsuit. Lawyers, at least in my experience, are generally not quick to file suit. Some are, of course, but usually a lot of other things have happened before the suit was filed. Strategically speaking, trying to resolve a conflict before filing suit is a much smarter path to victory. Simply put: there's no reason to believe that a plaintiff's lawyer is engaged in "drive-by lawsuits" simply because a firm that specializes in disability rights has filed a lawsuit.

When it comes to the named plaintiffs themselves, there may well be trolling. Looking at the list of lawsuits, you will see certain plaintiffs' names very frequently, and some law firms file suit on behalf of the same plaintiffs over and over. The 60-Minutes piece discusses some of this, and I have to admit that it definitely gives the impression of unscrupulousness.

The missing pieces to truly understanding the situation

In other words, even if we assume that each Defendant in the ADA suits was unique, less than 0.5% of retail businesses or restaurants were sued in 2016. They were sued by 0.003% of the population of people with disabilities. No matter how you use the numbers above, there’s no support for any claim of an epidemic of ADA lawsuits sweeping the country. Yes, there are some people who are clearly trolls. I think it is safe to say that the folks who’ve filed > 50 lawsuits this year are probably trolling. They account for 1.8% of the plaintiffs. On that basis alone, 60-Minutes should be ashamed for being sensationalist in its reporting.

I realize the above doesn’t address the epidemic of threats and shakedown letters around Web Accessibility over the past year. No data exists to provide exact numbers on this activity, but I assume less than 1000 of those demand letters have been sent. Even if we doubled that to 2000, that’s still around 0.01% of the websites in the United States.

Accessibility Matters

Since my site is mostly about web accessibility: if you don't have to use the Web with a screen reader or without a mouse, you really don't know what it is like for people with certain disabilities to try to use most websites. If you don't believe what I say about the difficulties, unplug your mouse for the day. Still not convinced? At Tenon, we've been inspired to do some investigating into error patterns and will be sharing some of our findings at CSUN 2017. Nater Kane and I will be presenting on Data-Mining Accessibility: Research into technologies to determine risk, and Job van Achterberg will discuss Automatica11y: trends, patterns, predictions in audit tooling and data, based on data from the world's top websites. Below is just a taste of the findings that will be shared at CSUN:

  1. There are nearly 70 automatically detectable issues on each page of the web – before accounting for contrast issues
  2. Color contrast issues are, by far, the most pervasive issues
  3. 28% of all images on the web have no alt attribute at all.
  4. Another 15% of alt attribute values are completely worthless things like “graphic”
  5. role attributes, when used, are just as likely to be used incorrectly – filled with arbitrary string values that have no basis at all in the ARIA specification
  6. 81% of buttons on the web have no useful text for their accessible name

Beyond raw numbers, it is important to remember that this is a civil rights issue, and accessibility both in the physical world and on the Web is, in a word, abysmal. There is no epidemic of people with disabilities going from website to website suing people. The epidemic is that we're 19 years past the formation of the Web Accessibility Initiative and developers at the largest websites in the world still can't figure out the alt attribute or how to put meaningful text in buttons. And if you do get sued, nobody will have pity on you for doing subpar work.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

5 talks in 6 days in 3 countries starting this Saturday

Holy cow. I can’t believe it. Tomorrow I start a trip that has been several weeks in the making. Now that it is around the corner I’m both excited and anxious! Even more, I’m truly humbled that such a trip is even possible.

Each of these events is open to the public. If you're in one of these cities, I hope to see you!

  1. November 5, 2016 – Toronto, Canada (9am – 5pm)
    Accessibility Camp Toronto (#a11yTO)
    OCAD University – 100 McCaul Street, Toronto Canada
  2. November 7, 2016 – Berlin, Germany (Noon – 5pm)
    Accessibility Club #4
    Contentful – Ritterstraße 12 10969 Berlin Germany
  3. November 8, 2016 – Amsterdam, Netherlands (6pm – 9pm-ish)
    Fronteers Monthly Meetup
    Desmet Studio’s – Plantage Middenlaan 4A 1018 DD Amsterdam Netherlands
  4. November 9, 2016 – Düsseldorf, Germany (5pm – 8pm)
    Trivago Academy
    Trivago – Bennigsen-Platz 1 40474 Düsseldorf Germany
  5. November 10, 2016 – Nürnberg, Germany (Noon – 5pm)
    Accessibility Club #5
    tollwerk – Klingenhofstraße 5 90411 Nürnberg Germany

Some tough love: Stop the excuses, already.

Over a year ago, Dale Cruse called me "militant" about accessibility. I know I use strong language at times, but I actively try to have a softer touch. I think he meant it kindly, but it still made me worry a little: do I come off too strong? I get a lot of compliments on my blog, so I felt conflicted. Could I be alienating people, too? I think about this kind of thing a lot, actually. But maybe the truth is that others in the accessibility field aren't confrontational enough.

Today, UC Berkeley posted A statement on online course content and accessibility which contains the following paragraph:

In many cases the requirements proposed by the department would require the university to implement extremely expensive measures to continue to make these resources available to the public for free. We believe that in a time of substantial budget deficits and shrinking state financial support, our first obligation is to use our limited resources to support our enrolled students. Therefore, we must strongly consider the unenviable option of whether to remove content from public access.

The statement by UC Berkeley’s Public Affairs department is, in a word: Bullshit. It is bullshit aimed at making it seem as though accessibility is burdensome and that somehow accessibility requirements are vague.

They say, early in their statement: “Despite the absence of clear regulatory guidance, we have attempted to maximize the accessibility of free, online content that we have made available to the public.” (emphasis mine).

The implication that accessibility for online content is a new topic in higher education – and therefore something UC Berkeley didn’t know about – is a fabrication. As many of you know, I maintain a list of web accessibility related litigation and settlements. The first time the DoJ sued a Higher Education institution was in 2003. NFB sued a handful of schools in 2009. The DoJ and US Dept. of Education’s OCR have been very active over the last few years – enough so that it is a very frequent topic of conversation at accessibility conferences.

If, for some reason, UC Berkeley needs to "implement extremely expensive measures" to make this online content accessible after the fact, it is because they didn't give a shit about accessibility from the beginning. That's their fault, plain and simple, and it has nothing to do with any sort of new requirements.

It is time for some tough love

The first version of WCAG came out in 1998. WCAG 2.0 came out in 2008. If you do work for the US Federal Government, Section 508 came out in 1998 and a refreshed version is due soon. At this point, none of this is new. These requirements aren't new. The methods of achieving compliance aren't new. The core Web technologies necessary to make web content accessible are not new. The needs of users with disabilities aren't new. If any of these topics are new to you, that's fine. Fucking learn them.

Do you know what union electricians do whenever a new electrical code comes out? They hold classes and learn it. Do you know what accountants do when new tax laws come out? They go to classes and learn them. Why? Because knowing what the hell you're doing is important to doing a good job.

Whether you like it or not, your company or organization is required by law – explicitly stated or not – to provide ICT products and services that are accessible to people with disabilities. If your work is not accessible, it does not meet the necessary quality standards. Learn how to do it and stop making excuses.

Get your VPAT/ GPAT in a hurry

I get several emails per month from Government Contractors looking for help writing a VPAT or GPAT document. Chances are this is the result of good SEO on my blog post Why a Third Party Should Prepare Your VPAT/GPAT. Unfortunately a lot of these contacts follow this pattern:

Hi Karl, I’m from XYZPDQ Corp. and we’re getting ready to submit an RFP response to the ABC Agency and they’re asking us for a VPAT/ GPAT. The responses are due in 3 days. Can you help us?

I got 3 such emails this past week, one of which arrived on Sunday with the proposal apparently due that evening.

The short answer to this is “No”.

There's another point that needs to be made, however, about why this is a really, really bad idea. If someone tells you they can turn around a VPAT or GPAT in that amount of time, RUN. They are risking not only the contract but possibly a whole lot more.

Why turning around a VPAT/ GPAT in a hurry is a horrible idea

Everything I said in the original posting is still true.

  • Preparing a VPAT/ GPAT requires extensive knowledge of Section 508
  • The information in the VPAT must be based on comprehensive review
  • The information in the VPAT should be as objective and unbiased as possible
  • The VPAT becomes part of the procurement

First, the kind of person with enough skills to fill out a VPAT/ GPAT correctly does not have the time to interrupt whatever they’re doing to write up a VPAT/ GPAT. This is especially true since accurately writing the document needs to be based on a comprehensive review. The time to write the VPAT/ GPAT itself might only be a day (or less) but that’s only after the comprehensive review. Depending on the nature of the product, that comprehensive review may take days or even weeks.

The last point in the list above is the most important one. When you submit the VPAT/ GPAT as part of your proposal, it becomes part of the procurement. It should tell you something that when VPATs are filled out at companies like Oracle, they are all reviewed by lawyers before they can be used.

Many people believe that Section 508 requires the government to ensure all of its ICT is accessible. That's a bit of an un-nuanced perspective. Section 508 requires the government to purchase the most accessible product that meets its business needs. If there's only one product that meets those needs and it is horribly inaccessible, Section 508 doesn't prevent the agency from buying it.

Given the above, imagine the following scenario: The government issues an RFP for a type of software that multiple vendors can provide. You submit your response and it includes a VPAT that declares a very high level of compliance with Section 508’s technical provisions. All other things being equal, the agency chooses your product over the competition due to your product’s higher level of accessibility. There’s only one problem: your VPAT isn’t accurate. Come time to deploy the software throughout the agency, it turns out that blind employees can’t use the software at all. Now what happens?

There are a couple of ways that could play out, and most of them are very unpleasant.

  1. The agency can cancel the contract
  2. The agency can refuse to extend the contract when it comes time to renew
  3. The agency can sue you to compel you to meet the claims of your VPAT

Of course there's always the chance that nothing happens. That depends on the size of the contract and the amount of impact the non-compliant system has on the agency. One thing is for sure: once the agency starts getting complaints from employees, you're going to have a headache on your hands that won't go away.

Still need the VPAT/ GPAT in a hurry?

I definitely understand the need to get your RFP submission in on time. At this point, you’re probably the only person with the necessary product knowledge to fill out the document. I recommend that whatever you do, you’re honest in your response. Here are some hints.

  1. Do not claim your product “Fully Supports” a provision unless you know for a fact that it does. “Fully Supports” should be reserved for cases where you know that the provision applies and that your product meets all requirements for conformance.
  2. If you have any evidence that your product doesn’t support 100% of the requirements (but supports some) then the answer is “Partially Supports”. Add comments to disclose the problems you’re aware of.
  3. If you have any evidence that your product fails the requirement (or fails more than it meets) then the answer is “Does not support”. Again, add comments to disclose the problems you’re aware of.
  4. If you know that a provision does not apply at all, the answer is “N/A”

In the explanations column, provide honest, concise information to back up your support claim. If you don't know the answer or don't understand the requirements for a provision, craft an honest declaration of that rather than make something up.

Final word on your urgent VPAT/ GPAT

Honesty, transparency, and accuracy are the key to avoiding problems. Your next step is to find someone to do an informed assessment and write an accurate VPAT. Do it now instead of waiting until the last minute.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

The best salesman I’ve ever met wasn’t a salesman

I don't think I've met anyone who owns their own business who is truly satisfied with how much they're selling. For Tenon, sales means new features, so when I'm looking at a backlog of awesome ideas I'm also calculating the time and money it would take to develop them. More sales means more money, which means more features. Figuring out the right sales and marketing approach is something I wish came more easily. Thinking about this recently, I was reminded of the best salesman I've ever met.

I worked for Bill Killam from 2004 – 2007. Bill runs a small usability consulting firm called User-Centered Design. Bill had worked for UserWorks for a while and had chosen to go solo. Our paths crossed when I was E-Commerce Manager for NASA Federal Credit Union. I was shopping around for some usability help on our website. I had been extolling the virtues of usability for a while but nobody would listen. Suddenly the CEO of the Credit Union came across an article on usability and now wanted to make the website more usable. Bill Killam was one of many usability consultants I contacted off the UPA directory to get a quote.

I must’ve sat through a half-dozen calls discussing what I wanted to do with the website and hearing pitches from sales people. Each sales person followed up with a big fancy presentation brief and a detailed quote. After talking with Bill, he sent over a plain text email. His email contained a paragraph of about 100 words describing what he thought I should do and a price for what that work would cost. No fancy sales pitch, no presentation, no fat project brief. All steak, no sizzle. Sales people might read the above and cringe. Bill himself might read the above and cringe. But to me it was pure genius, because he was the only person I talked to in that process who actually demonstrated to me that he listened to my problem and in that single plaintext email he related to me how his deliverables would address my problem.

Ultimately, I didn’t end up using Bill for the work on the Credit Union website. Instead, I went to work for him. A few months after I started that search for a Usability consultant to help on the Credit Union’s site, Bill hired me to do development work and in that time I got to see multiple examples of his incredible sales skills.

The thing was, Bill never really thought of himself as a salesman, which is why he was so good. Instead, he was a problem solver. He never tried (at least not that I could tell) to persuade anyone to do anything. He never "pitched" people. Instead, he went into sales meetings almost as if they were project kickoff meetings. He didn't just "assume the sale"; he went in with the perspective that he already had the gig and just needed to scope it out properly to give the right price. He never went into long-winded dialogs filled with superlatives. He never droned on about features and benefits. He listened to the customer, learned what their problems were, and proposed ways to solve those problems.

Salespeople are too quick to treat a sales call as an event where they must head off any objections. They think they need to convince the customer to buy. They interrupt customers as soon as they see an opportunity to talk more about their product or service. The problem is that the more time you spend talking, the less time you spend listening. Listening is how you learn about the customer's problem. Listening gives you the opportunity to understand the customer's pain points. The salesperson thinks their job is to make a sale. It isn't. The salesperson's job is to determine what the customer's problems are and present appropriate solutions to those problems. Bill understood that and, as a consequence, never had a problem selling his services.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

Extreme Accessibility, Revisited

I'm really terrible about responding to emails. I get (and send) a lot of email, and much of it just sits for embarrassingly long times. After CSUN, I got a great email with some questions from Vincent François about some of the things I said during my CSUN 2016 presentation, "Extreme Accessibility".

Vincent had a couple of good follow-up questions about the material, and I wanted to share the answers:


When bugs are found, tests are written. Programmers have a failed test to focus their efforts and know when the problem is fixed

Vincent included a picture of the above slide and said:

Are you saying that we should create a new test each time we find a bug? In order to, later, focus on the part of the product in which we encountered and created the bug?

My answer:

One of the things about bugs is that once you’ve verified the bug, you actually have two important pieces of information: the details on why it is a problem and what should happen instead. With this information at hand, you can write a test. The test should be an assertion that the code does what it is supposed to do.

Taking accessibility out of the picture, for a minute, imagine we are on Amazon. We have found a bug: when we click the “Add to wish list” button, it actually adds the product to the cart. The first step to fixing the bug is writing a test that tests the following:

  1. Given that I am on a product page
  2. When I click the “Add to wish list” button:
  3. The number of items in the Cart is not increased
  4. The number of items in the wish list increases by 1
  5. The specific product is found in the wish list

The above is the criteria we will use to verify that the bug was fixed. Now we modify our code and test it against the above criteria. We don’t ship the bug fix until the tests pass.

The automated test(s) also ensure that we don’t end up “unfixing” the bug later down the road.

The latter point above is super important. One of the biggest pieces of tech debt I run into is cases where an untested fix gets undone somehow – or worse, has another side effect elsewhere. This is why, if someone reports a buggy test in Tenon, I ask for the URL of the page they're testing and use that URL to verify whether I've fixed the test bug.
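
Here is a minimal, self-contained sketch of that workflow in JavaScript. The store object and its method names are hypothetical stand-ins for real application code; the point is that the test encodes the criteria above, fails against the buggy code, and keeps failing if anyone ever "unfixes" it.

    // Hypothetical wish-list example: write the failing test first.
    const assert = require('assert');

    // Imagine this is the buggy production code: "add to wish list"
    // incorrectly pushes the product into the cart.
    function createStore() {
      return {
        cart: [],
        wishList: [],
        addToWishList(product) {
          this.cart.push(product); // BUG: should be this.wishList.push(product)
        }
      };
    }

    // The test mirrors the criteria listed above. It fails against the buggy
    // code, passes once the bug is fixed, and guards against regressions.
    function testAddToWishList() {
      const store = createStore();
      store.addToWishList('ISBN-12345');

      assert.strictEqual(store.cart.length, 0, 'cart must not grow');
      assert.strictEqual(store.wishList.length, 1, 'wish list grows by one');
      assert.ok(store.wishList.includes('ISBN-12345'), 'product is in the wish list');
    }

    try {
      testAddToWishList();
      console.log('PASS');
    } catch (err) {
      console.error('FAIL:', err.message);
    }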

Vincent’s next question was:

With a11y-specific tests, is there a risk to separate a11y from overall quality and, in some cases, to choose to postpone them?

This is an important consideration. One of the things I really harp on during Tenon sales demos is the ability to put Tenon into your existing toolset. This is important not just for convenience or efficiency but also (I hope) so that it keeps accessibility from being seen as a separate thing.

Here’s my answer:

The a11y-specific tests I advocate for are only those tests directly aimed at verifying an accessible experience. I think in the presentation I used the example of a modal dialog. In a normal development scenario, the developer might write a unit test around whether the dialog opens and closes. But there are accessibility implications with modal dialogs, including things like keyboard accessibility and focus management, and these require their own tests. Thankfully these patterns have been provided for us by the fine folks at the W3C. We can take those patterns and turn them into test cases, and those test cases can be turned into automated tests as well. The best part is that accessibility testing is now baked into our normal process rather than being something separate.
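
As a rough illustration, here is what one of those dialog-specific checks might look like when automated with Puppeteer. The URL and the #open-dialog and .dialog selectors are hypothetical; the assertions follow the expectation that focus moves into the dialog on open and returns to the trigger on Escape.

    // Sketch of automated focus-management checks for a modal dialog.
    // URL and selectors are hypothetical; adapt them to your own markup.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com/page-with-dialog');

      // Open the dialog with the keyboard.
      await page.focus('#open-dialog');
      await page.keyboard.press('Enter');

      // Focus management: focus should now be inside the dialog.
      const focusInDialog = await page.evaluate(
        () => !!document.activeElement.closest('.dialog')
      );

      // Keyboard accessibility: Escape closes the dialog and focus
      // returns to the control that opened it.
      await page.keyboard.press('Escape');
      const focusReturned = await page.evaluate(
        () => document.activeElement.id === 'open-dialog'
      );

      console.log({ focusInDialog, focusReturned });
      await browser.close();
    })();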

Upcoming Event this week

On Thursday August 18, 2016 I’ll be taking part in a panel discussion, "Including Accessibility in the Agile Development Process". Other panelists include Mark Urban (CDC Section 508 Coordinator), Tony Burniston (FBI), and Matt Feldman (Deque Systems)

This FREE event will be held at the National Science Foundation at 4201 Wilson Boulevard, Arlington, VA 22230, Stafford 1 Building, in conference room 375, from 8:30 to 3:30.

NSF is located in the Ballston area of North Arlington, Virginia, between Wilson Boulevard and Fairfax Drive, one block south of the Ballston-Marymount University Metro stop on the Orange Line. Parking is available in the Ballston Common mall, in the NSF building, and at other area parking lots and garages. Metered parking is also available on the surrounding streets.

Visitors are asked to check in at the Visitor and Reception Center in the Stafford 1 building, on the first floor to receive a visitor pass, before going on to meetings, appointments, or other business at the NSF. Visitors bringing computers into NSF should consult the NSF Computer Security Policy before arriving.

For directions go to http://www.nsf.gov/about/visit/

NOTE! To attend, you must register: https://registration.section508.gov/

Should you use more than one automated accessibility testing tool?

If you're aware of Betteridge's Law, then you already know the answer.

There are some who would argue that you need to use multiple tools because automated accessibility tools can't find everything and because each tool takes its own approach to testing – including what it specifically tests for. This sounds spot on, but it misses the point of automated testing entirely.

I believe the primary benefit of automated testing is the efficiency and repeatability it adds to the test process. Using multiple tools only adds work, thereby reducing that efficiency benefit. In addition, irrespective of the quality and accuracy of each tool, another problem arises: differing guidance. Even when they find the same things, each tool uses different words to describe the issues. For instance, Tenon will say "This form field is missing a label" while AMP will say "Provide explicit labels for form fields". They've both found the same issue, but the user now needs to interpret the messages to determine whether they're pointing out the same thing. Add the fact that each tool may find different things, apply different severities, and provide different guidance, and you're really losing efficiency, because now you have to determine which tool is right and where. You end up adding a lot of work for very little benefit.

We already know that there's only so much automated testing tools can find. Automated testing isn't the end but rather the beginning of the process. Tools aren't a replacement for expert review but rather a supplement to it. Using more than one tool doesn't close that gap effectively; it adds unnecessary work whose time is probably better spent on manual testing. The best approach is to find a tool you like, become an expert user of it, get familiar with how it works (including its shortcomings), and use it.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

Is WCAG 2.0 too complicated?

A couple of weeks ago, an article was posted on LinkedIn that implied WCAG is "impossible". Numerous others, including myself, leveled sharply negative responses at the article, but not at this specific claim that WCAG is "impossible". I'd like to help my readers understand WCAG a little bit better.

Generalized statements are particularly false

The first thing to understand is that while it is easy to find things to criticize about WCAG, many criticisms don't stand up in context. Many people say WCAG is too long. I say it just feels that way: in most cases you don't need an encyclopedic knowledge of WCAG and its associated materials. You need what is relevant to you in the context of your work. Sure, the volume of techniques and failures is massive, but the number that are actually relevant to you is probably much smaller. Think about it: if you aren't using SMIL or Silverlight or Flash, you can safely ignore those techniques and focus on what applies to you.

Accessibility, as a topic, can seem pretty nebulous if you’ve never been expected to make your systems accessible. I know this from experience. When I first got interested in accessibility, I knew nothing and made a lot of mistakes. Thankfully there were resources like the WebAIM discussion list where people freely shared information in a friendly environment. The regular contributors to that list helped me understand that accessibility isn’t hard when you learn to consider accessibility along the way.

Steps to Understanding WCAG

It is easy to call WCAG "impossible" if you don't understand it. In fact, this is exactly why Billy Gregory and I came up with the idea for our talk Why WCAG? Whose Guideline is it, Anyway? with The Viking and The Lumberjack at CSUN this spring. Our point was that there are a number of things people misunderstand about WCAG, and our goal was to help clear those things up. The talk generated plenty of discussion and even controversy, but hopefully it also helped people understand WCAG a bit more. For those who weren't there, let me help clarify WCAG.

WCAG is a Standard

WCAG probably feels long, but as a standard it really isn't. The presentational format of W3C documents in general doesn't help, but as a W3C Recommendation its normative content is actually pretty short. WCAG 2.0 itself is well organized if you stop and let it soak in for a minute: it has a clear Introduction that describes its purpose, structure, and associated materials, and the content follows a consistent structure. The wall-of-text presentation may give the impression of length, but once you understand how WCAG is organized, it won't feel so long.

Many people have criticized the decidedly high reading level at which WCAG is written. I can't say I disagree. It is dense information, but it is also clear. We must remember that WCAG is a standard. It has been incorporated into laws internationally and will (eventually) be incorporated into United States law. A standard can't get to that point if it is contradictory or lacking in detail. At times, the WCAG Working Group agonized over the meanings of individual words – the type of activity I personally have no stomach for – and we should thank them for it. If you ever find yourself not understanding terms and phrases like "changes in context", consult the glossary. (Hint: you can also ask for clarification on the WAI-IG mailing list.)

Peel the Onion to get what you want

As I mentioned above, one of the things that makes WCAG seem dense is its presentation. But what most people seem to miss is the true structure of WCAG. The W3C describes it in the "Layers of Guidance" section, which I simplify below:

  • Principles – At the top are four principles that provide the foundation for Web accessibility: perceivable, operable, understandable, and robust.
  • Guidelines – Under the principles are guidelines. There are 12 guidelines that provide the basic goals that authors should work toward
  • Success Criteria – For each guideline, testable success criteria are provided.
  • Sufficient and Advisory Techniques – For each of the guidelines and success criteria in the WCAG 2.0 document itself, the working group has also documented a wide variety of techniques.

The numbering of WCAG's Success Criteria follows this structure: Principle, Guideline, Success Criterion. To illustrate, let's look at 1.3.1 Info and Relationships:

  1. Principle 1: Perceivable
  2. Guideline 1.3: Adaptable
  3. Success Criterion: 1.3.1 Info and Relationships

To understand WCAG and make it feel much less "impossible", you should first understand the Principles. They are the spirit of WCAG – the goals an accessible system should meet for its users. Everything else in WCAG simply provides more detail on how to meet those goals.

Next up are the Guidelines. There are 12 guidelines. These are the high-level goals for each Principle. For instance: “Guideline 2.1 Keyboard Accessible: Make all functionality available from a keyboard.”

Finally, there are the Success Criteria: the specific, testable criteria against which conformance is judged.

I'm of the opinion that those who criticize WCAG as "impossible" concentrate on the Success Criteria without first absorbing the Principles and Guidelines. To truly get the value of WCAG, it is vital to understand it from the top down: Principles, then Guidelines, then Success Criteria. The Success Criteria are worded very specifically and clearly. If, at any time, you find a Success Criterion confusing, run through it a couple of times with a careful read.

2.1.2 No Keyboard Trap: If keyboard focus can be moved to a component of the page using a keyboard interface, then focus can be moved away from that component using only a keyboard interface, and, if it requires more than unmodified arrow or tab keys or other standard exit methods, the user is advised of the method for moving focus away. (Level A)

To understand the above, we first need to recognize that it is part of Principle 2: Operable, whose goal is that "User interface components and navigation must be operable." It also falls under the first guideline of that principle: "Guideline 2.1 Keyboard Accessible: Make all functionality available from a keyboard." Let's parse the Success Criterion itself by breaking it down:

  1. If keyboard focus can be moved to a component of the page using a keyboard interface,
  2. then focus can be moved away from that component using only a keyboard interface,
  3. and, if it requires more than unmodified arrow or tab keys or other standard exit methods,
  4. the user is advised of the method for moving focus away.

Our decision tree for testing this is then:

  1. Can focus be moved to a component of the page?
  2. Can that focus be moved to the component using a keyboard? (If not then there’s a 2.1.1 issue)
  3. Can that focus be moved away using a keyboard?
  4. Can focus be moved away using standard exit methods? (read as: Tab or Shift+Tab, but this might depend on the type of control)
  5. If standard exit methods can’t be used, is the different method disclosed to the user?

I realize there are a couple of pieces of information a layperson may not know in terms of how to test the above, especially when it comes to "standard exit methods". This is where WCAG's large volume of related documents comes into play. Explore those documents as necessary to close any gaps in understanding.
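
As a rough sketch, the Tab/Shift+Tab branch of that decision tree can even be scripted. The example below uses Puppeteer against a hypothetical URL and widget selector; whether a non-standard exit method is properly disclosed to the user still needs human judgment.

    // Sketch of an automated keyboard-trap probe (the Tab branch only).
    // URL and '#third-party-widget' selector are hypothetical.
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com/widget-page');

      // Move focus into the component using the keyboard interface.
      await page.focus('#third-party-widget');

      // Press Tab a bounded number of times and watch where focus goes.
      let escaped = false;
      for (let i = 0; i < 50; i++) {
        await page.keyboard.press('Tab');
        const stillInside = await page.evaluate(
          () => !!document.activeElement.closest('#third-party-widget')
        );
        if (!stillInside) {
          escaped = true; // focus left the component, so no trap via Tab
          break;
        }
      }

      console.log(escaped
        ? 'Focus can leave the widget with Tab: no trap detected'
        : 'Focus never left the widget: possible keyboard trap');
      await browser.close();
    })();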

Impossible is Nothing

To borrow a phrase from Robert Pearson: "Impossible is Nothing". WCAG is not impossible. It would never have reached final Recommendation status if it were. Although unquestionably dense, it becomes far more approachable with a careful read once you understand its structure.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

How long does it take to test 25 billion web pages?

If you started it during the reign of Thutmose I of Egypt, you’d be done soon.
Or you could invest several million dollars.
Or maybe doing it is just a stupid idea in the first place

On July 6, 2016, Michelle Hay of the company Sitemorse published an "article" (a term I'm using loosely here) titled "WCAG 2.0 / Accessibility, is it an impossible standard that provides the basis for excuses?". Overall, I found the article to be very poorly written, based on a false premise, and a demonstration of extreme ignorance at Sitemorse. Many others felt the same way, and Léonie Watson's comments address many of the factual and logical shortcomings of the article. Something I personally found interesting in the Sitemorse article is the following pair of sentences:

What we are suggesting, is to create a list of priorities that can be done to improve accessibility. This will be based on the data we have collected from 25+ billion pages and feedback from industry experts, clients and users.

25 billion pages is a massive number of pages. It is also extremely unlikely to be true and definitely not at all useful. To prove my point, I've used Tenon to gather the data I need.

Historically, Tenon averages about 6 seconds per distinct URL to access each page, test it, and return results. There are a number of factors involved in the time it takes to process a page. We frequently return responses in around a second, but some pages take up to a minute. I'll discuss the contributing factors to response time in more detail further below.

Tenon does its processing asynchronously, which means it won't get choked by pages that take a long time to test. In other words, if you test 100 pages, it won't take 6 * 100 seconds to get through them all. The average time across the entire set will be shorter than that, because Tenon returns results as soon as they're available, in a non-blocking fashion. For example, if one page takes 30 seconds to test, Tenon could easily test and return results for a dozen or more other pages in the meantime. The goal of this experiment is to see how long it would take to test 25,000,000,000 pages using Tenon.

Sitemorse’s article does not disclose any details about their tool. Their website is chock full of vague platitudes and discloses no substantive details on the tool. They don’t even say what kind of testing it does. Regardless, given my personal history with automated tools, I’m fairly confident that across an identical sample size Tenon is at least as fast, if not faster.

Test Approach

The test approach I used is outlined as follows, in case anyone wants to replicate what I’ve done:

  1. I wanted to test at least 16,641 distinct URLs. Across a population size of 25,000,000,000 URLs, this gives us a 99% Confidence Level with a Confidence Interval of just 1.
  2. The list of URLs piped into Tenon all come from a randomized list of pages within the top million web domains listed by Alexa and Quantcast.
  3. The testing was performed on a completely fresh install of Tenon on my local machine. That means no other users on the system, no other processes running, and all available resources being dedicated to this process (subject to some caveats below)
  4. This testing used a Bulk Tester that populates a queue of URLs and submits them to the Tenon API at a rate of 1 URL per second via AJAX. It does this asynchronously – in other words, it keeps sending requests without ever waiting for a response (see the sketch after this list). I could have reduced the time between requests, but this was a local install and I didn't want to DoS the machine I was also using for work while this was going on.
  5. While the bulk tester does other things like verifying that the API is up and verifying the HTTP status code of the tested page before sending it to Tenon’s API, the elapsed time is tracked solely from the time the API request is sent to the time the API responds. This avoids the count being skewed by the bulk tester’s other (possibly time-intensive) work.
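
For the curious, here is a simplified sketch of that kind of bulk tester: one request per second, fired without waiting for the previous response, timing each round trip. The endpoint, parameter names, and URL list are placeholders for a local install rather than documented API details, so treat them as assumptions and check your own install's documentation.

    // Simplified bulk-tester sketch. Endpoint and parameter names are
    // placeholders for a local install; requires Node 18+ for global fetch().
    const urls = require('./urls.json');   // hypothetical randomized URL list
    const API = 'http://localhost/api/';   // hypothetical local API endpoint
    const KEY = process.env.API_KEY;

    let i = 0;
    const timer = setInterval(() => {
      if (i >= urls.length) return clearInterval(timer);
      const url = urls[i++];
      const started = Date.now();

      // Fire and forget: responses are handled whenever they arrive,
      // so a slow page never blocks the rest of the queue.
      fetch(API, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({ key: KEY, url: url })
      })
        .then((res) => {
          const elapsed = (Date.now() - started) / 1000;
          console.log(url + '\t' + res.status + '\t' + elapsed.toFixed(2) + 's');
        })
        .catch((err) => console.error(url + '\tERROR\t' + err.message));
    }, 1000);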

Caveats and Concerns

This test approach carries a few caveats and concerns. In an ideal world, I would have deployed a standalone instance that fully replicates our Production infrastructure, including load balancing, database replication, and all of that. I don't think that's truly necessary given the stats on my local machine, which I discuss below.

Assessing System

Any accessibility testing software will be subject to the following constraints on the machine(s) hosting it. These factors will impact the ability to respond to the request, assess the document, and return a response:

  1. Available Memory: Memory allows a testing tool to store frequently accessed items in cache. For instance, Tenon makes extensive use of Memcached in order to store the results of repetitious queries on infrequently changing data. The Macbook Pro I used for this has 16GB of 1600 Mhz DDR3 RAM.
  2. Available CPU: The more CPU and the more cores means the server can do more work. The Macbook Pro I used for this has a 2.8GHz Intel Core i7 Processor. The processor has 4 cores.
  3. Network performance: Simply put, the faster the connection between the Assessing System and the Tested System, the less time is spent waiting for all of the assets to transfer before testing can begin. I'm on a Verizon FiOS connection getting 53 Mbps both up and down.

Overall, Tenon performs well as a local install. That said, it would be more "scientific" if this were the only thing the machine was doing, but like I said before, it is my work machine. In a Production environment, Tenon is provisioned with far more resources than it needs so it retains its responsiveness under high demand. Provisioned locally on a Virtual Machine, Tenon doesn't require very much RAM, but it loves CPU. Although the amount of CPU I provide to the VM is sufficient, I could easily throw more requests at it if I could dedicate all 4 cores to the VM. There were also times when the local Tenon install competed heavily for network bandwidth with Google Hangouts and GoToMeeting. All in all, I doubt the local instance's horsepower plays too heavily into the results across the entire test set.

Tested System

All of the above concerns apply to the tested system. The following additional concerns on each tested URL may also impact the time needed to return results:

  1. Client-side rendering performance: One of Tenon’s most important advantages, in terms of accuracy, is that it tests the DOM of each page, as rendered in a browser. This gives Tenon significant power and broadens the range of things we can test for. One downside to this is that Tenon must wait for the page and all of its assets (images, CSS, external scripts, etc.) to load in order to effectively test. A poorly performing page that must also download massive JavaScript libraries, unminified CSS, and huge carousel images will take longer to test. For instance, if a page takes 10 seconds to render and 1 second to test, it will take a total of 11 seconds for Tenon to return the response. This is probably the most significant contributor to the time it takes to test a page in the real world.
  2. Size of the document/ Level of (in)accessibility: Among the many factors that contribute to the time it takes to assess a page and return results is how bad the page is. In Tenon’s case, our Test API doesn’t test what isn’t there. For instance, if there are no tables on a page then the page won’t be subjected to any table-related tests. In other words, even though Tenon can test nearly 2000 specific failure conditions, how many of those that it actually tests for is highly dependent on the nature of the tested document – smaller, more accessible documents are tested very quickly. The converse is also true: Larger, more complex, documents and documents with a lot of accessibility issues will take longer to test. The most issues Tenon has ever seen in one document is 6,539.

Results

  • The very first result was sent at 7/12/16 21:59 and the very last result was 7/13/16 18:38.
  • The total number of URLs successfully tested was 16,792.
  • That is 74,340 seconds total with an average time across the set of 4.43 seconds
  • There were several hundred URLs along the way that returned HTTP 400+ results. This played into the total time necessary, but I purged those from the result set to give Sitemorse’s claim the benefit of the doubt.

Total Issues

Minimum 0.00
Maximum 2015.00
Mean 66.89
Median 37.00
Mode 0.00
Standard Deviation 85.79
Kurtosis 49.02
Skewness 4.35
Coefficient of Variation 1.28

Errors

Minimum 0.00
Maximum 2011.00
Mean 47.51
Median 28.00
Mode 0.00
Standard Deviation 67.92
Kurtosis 126.13
Skewness 7.46
Coefficient of Variation 1.43

Warnings

Minimum 0.00
Maximum 464.00
Mean 19.38
Median 1.00
Mode 0.00
Standard Deviation 55.10
Kurtosis 12.39
Skewness 3.64
Coefficient of Variation 2.84

Elapsed Time (measured on a per-URL basis)

Minimum 0.37 seconds
Maximum 49.87 seconds
Mean 9.50 seconds
Median 7.50 seconds
Mode 6.43 seconds
Standard Deviation 7.26 seconds
Kurtosis 3.20
Skewness 1.66
Coefficient of Variation 0.77

Using this to assess Sitemorse’s claim

As a reminder, the sample size of 16,792 pages is more than enough to have a 99% Confidence Level with a Confidence Interval of just 1. One possible criticism of my methods might be to suggest that it would be more “real-world” if I tested pages by discovering and accessing them via spidering. That way true network and system variations could have had their impact as they normally would. Unfortunately that would also add another unnecessary factor to this: the time and resources necessary to run a spider. Having all of the URLs available to me up front allows me to focus only on the testing time.

Given this data, let's take a look at Sitemorse's claim that they've tested 25,000,000,000 pages:

At 4.43 seconds per page, it would have taken Sitemorse’s tool 3,509.5 years to test 25,000,000,000 pages running around the clock – 24 hours a day, 7 days a week, 365 days a year with zero downtime. Could they have done it faster? Sure. They could have used more instances of their tool. All other things being equal, running 2 instances could cut the time in half. With an average assessment time of 4.43 seconds, they would need 3,510 instances running 24/7/365 to do this work in less than a year.

(4.43 seconds each * 25,000,000,000) / (60 seconds per minute * 60 minutes per hour * 24 hours per day * 365 days per year)

Using Tenon’s average monthly hosting costs, testing 25,000,000,000 pages would cost them nearly $10,530,000 in server costs alone to run the necessary number of instances to get this analysis done in less than a year. This monetary cost doesn’t include any developer or server admin time necessary to develop, maintain, and deploy the system. The Sitemorse article doesn’t disclose how long the data gathering process took or how many systems they used to do the testing. Regardless, it would take 351 instances to perform this task in less than a decade.
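
As a quick sanity check, the arithmetic above can be reproduced in a few lines. The per-instance hosting cost (roughly $3,000 per year, or about $250 a month) is an assumption backed out of the $10,530,000 total; the other inputs come straight from the figures in this post.

    // Back-of-the-envelope check of the figures above.
    const pages = 25e9;
    const secondsPerPage = 4.43;
    const secondsPerYear = 60 * 60 * 24 * 365.25;

    const totalSeconds = pages * secondsPerPage;
    const yearsForOneInstance = totalSeconds / secondsPerYear;        // ~3,509.5
    const instancesForOneYear = Math.ceil(yearsForOneInstance);       // ~3,510
    const instancesForTenYears = Math.ceil(yearsForOneInstance / 10); // ~351

    // Assumed per-instance cost, inferred rather than stated in the post.
    const assumedCostPerInstancePerYear = 3000;
    const serverCostForOneYear =
      instancesForOneYear * assumedCostPerInstancePerYear;            // ~$10,530,000

    console.log({
      yearsForOneInstance,
      instancesForOneYear,
      instancesForTenYears,
      serverCostForOneYear
    });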

Why have I focused on a year here? Because that's the maximum amount of time I'd want this task to take. They could have done it across the last decade for all we know. However, the longer it takes to do this testing, the less reliable the results become. Across a decade – indeed, across anything more than a year – it becomes increasingly likely that technology trends would necessitate changes to how and what is tested. A few years ago, for instance, it was prudent to ensure all form fields had explicitly associated LABEL elements. Now, with the proliferation of ARIA-supporting browsers and assistive technologies, you need to include ARIA in your testing. Data gathered using old tests becomes less accurate and less relevant the longer the process takes. I realize I'm assuming a lot here. They could have continually updated their software along the way, but I strongly doubt that was the case. Keep in mind that the 24/7/365 approach is vital to getting this done as fast as possible: any downtime, any pause, and any change along the way would only add to the time.

Giving them the benefit of the doubt for a moment, let's assume they had the monetary and human resources for this task. Even if they did something like this, it raises the question: why?

The entire idea is ridiculous

I'm not saying that it isn't possible to test 25,000,000,000 pages. In fact, massive companies could perform such a task in no time at all. But I also think doing it is a ridiculous idea, and when I say "ridiculous" I mean it in the strictest sense of the word. No matter how they performed such a project – 351 instances across a decade, 3,510 instances for just under a year, or something in between – doing so is an ignorant, uninformed, and useless pursuit. It indicates a woeful lack of knowledge and experience in development, accessibility, and statistics.

In their article they state:

With this information we will consider the checkpoints of WCAG 2.0 and come up with 10 things that should be dealt with to improve accessibility which will all be understandable, manageable, measurable and achievable.

The idea of making such decisions based on rigorous data gathering sounds impressive. I have a lot of respect for approaches that draw their conclusions from data rather than opinion. The question that must be asked, however, is whether the type of information they seek might already exist or, barring that, could be gathered using a different, cheaper, faster, or more accurate approach. If you were to ask accessibility experts what their "Top 10 Things" are, you'd get a pretty wide variety of answers. You'd probably get things that are vague, overly broad, or driven by personal bias. However, if you were to moderate such a process using the Delphi Method [PDF], you'd probably come to consensus rather quickly on what those "Top 10 Things" should be. In fact, I'd argue that given a hand-picked list of respected industry experts, this process could be completed in a weekend. This illuminates the first characteristic of Sitemorse's claim that makes it worthy of ridicule.

The second characteristic that makes this claim worthy of ridicule is the fact that they used an automated tool for this task. That’s right, I’m the founder of a company that makes an automated tool and I’m telling you that using data from an automated tool to do research like this is stupid. This is because there’s only so much that an automated tool can detect. Automated testing tools are not judges. They cannot prove or disprove any claims of conformance and they cannot even definitively tell you what the most frequent or highest impact issues are on a specific page.

Automated testing tools are excellent at doing one thing and one thing only: finding issues that the tool has been programmed to find. Nothing more. Any time you use an automated testing tool, you’re subjecting the tested system to a pre-defined set of checks determined by the product’s developer. The nature, number, accuracy, and relevance of those checks will vary from one tool to another. There are a large number of things that cannot be tested for via automation and an equally large number of things that are too subjective to test for.
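
If that sounds abstract, here’s a minimal sketch of what “a pre-defined set of checks” means in practice. The two rules are simplified illustrations I made up for this post, not any vendor’s actual implementation.

```python
# Minimal sketch: an automated tool is, at its core, a pre-defined list of
# checks written by the tool's developer. These two rules are illustrative only.
from bs4 import BeautifulSoup

def img_missing_alt(soup):
    return [img for img in soup.find_all("img") if not img.has_attr("alt")]

def link_has_no_text(soup):
    return [a for a in soup.find_all("a") if not a.get_text(strip=True)]

CHECKS = {"image missing alt": img_missing_alt,
          "link has no text": link_has_no_text}

def run_checks(html):
    soup = BeautifulSoup(html, "html.parser")
    return {name: len(check(soup)) for name, check in CHECKS.items()}

print(run_checks('<img src="x.png"><a href="/home"></a><a href="/about">About</a>'))
# -> {'image missing alt': 1, 'link has no text': 1}
# Anything these functions weren't written to look for simply never shows up
# in the results, no matter how many pages you point them at.
```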

The application of automated testing results to a process like this is only relevant if it is being used to validate the “Top 10 Things” that were determined by the experts. I believe that taken on their own, the opinions of experts and the data gathered from a tool would differ significantly. For instance, one of the Top 10 issues – by volume – detected by Tenon is for images that have alt and title attributes that are different. The reason we raise this issue is because there’s a likelihood that only one of these values is the actual text alternative for the image. Supplying both attributes – especially when they’re different from each other – leaves you with at least a 50/50 chance that the supplied alt is not an accurate alternative. After all, what could be the possible purpose of providing the differing title? Even though that’s a Top Ten issue by volume, it certainly isn’t going to make any Top Ten list created by experts. In the vast majority of cases this issue could be best characterized as an annoyance, especially because the information is (ostensibly) there in the DOM and can be discovered programmatically.
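
Here’s a minimal sketch of that particular check, again with made-up markup; it illustrates the reasoning, not Tenon’s actual implementation.

```python
# Minimal sketch of a pre-defined check: flag images whose alt and title
# attributes are both present but different. Illustrative only.
from bs4 import BeautifulSoup

html = """
<img src="logo.png" alt="Acme Corp" title="Acme Corp">
<img src="chart.png" alt="Sales chart" title="Click to enlarge">
"""
soup = BeautifulSoup(html, "html.parser")

def alt_title_mismatch(img):
    alt = img.get("alt")
    title = img.get("title")
    # Only an issue when both are supplied and they disagree.
    return alt is not None and title is not None and alt.strip() != title.strip()

for img in soup.find_all("img"):
    if alt_title_mismatch(img):
        print(f"Possible issue: alt={img.get('alt')!r} vs title={img.get('title')!r}")
```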

Finally, there’s the complete lack of understanding of statistics and sample sizes. If we assume that the purpose of Sitemorse’s testing of 25,000,000,000 pages is to gather statistically significant information on the accessibility of the web, they’ve overshot their sample size by a ridiculous amount. And again, by “ridiculous” I truly mean worthy-of-ridicule. The size of your sample should be large enough that you’ve observed enough of the total population to make reliable inferences from the data. A sample that is too small means you won’t have enough observations to compensate for the variation in the data. The sample size, when compared to the population size, allows you to calculate a Confidence Level and a Confidence Interval. In layperson’s terms, the Confidence Level is how “certain” you can be that your results are accurate. The Confidence Interval is what people commonly refer to as the margin of error. For instance, if the average result of a survey is “10” with a Confidence Interval of “2”, then the actual answer could be anywhere between “8” and “12”.
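
Here’s a minimal sketch of how those two numbers interact for a sampled proportion (e.g., “what percentage of pages have issue X?”), assuming the worst-case p = 0.5 and the commonly used z-score of 2.58 for a 99% Confidence Level:

```python
# Minimal sketch: margin of error (Confidence Interval) for a sampled
# proportion, assuming worst-case p = 0.5 and a 99% Confidence Level.
import math

Z_99 = 2.58    # z-score commonly used for a 99% Confidence Level
P = 0.5        # worst-case (maximum variance) assumption

def margin_of_error(n, z=Z_99, p=P):
    return z * math.sqrt(p * (1 - p) / n) * 100   # in percentage points

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"sample of {n:>9,} pages -> margin of error of about ±{margin_of_error(n):.2f}")
```

Notice how quickly the margin stops shrinking as the sample grows; that’s the heart of the problem with Sitemorse’s sample size.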

What kind of sample size do you need to make inferences on the accessibility of the entire web? You might think that number would be pretty massive. After all, the total number of web sites is over 1 Billion and growing literally by the second. How many distinct URLs are there on the web?

In August 2012, Amit Singhal, Senior Vice President at Google and responsible for the development of Google Search, disclosed that Google’s search engine found more than 30 trillion unique URLs on the Web… (Source)

Apart from the statement above, getting an authoritative and recent number on the total number of distinct URLs is really difficult. Fortunately it doesn’t really matter, because 30 Trillion unique URLs is, for our purposes, the same as Infinity. The required sample size isn’t a fixed percentage of the population size. After a certain point, you don’t add much reliability to your inferences just because you’ve gathered a huge sample. Once you’ve gathered a sufficiently large sample, you could double it, triple it, or even quadruple it and not get meaningfully more reliable data. In fact, doing so is a waste of time and money with zero useful return.

What’s the right size for the sample? 16,641 pages. In other words, the required sample is the same whether the population is the entire web or the 25,000,000,000 pages Sitemorse claim to have tested, because, as I’ve said, there comes a point where continued testing is wholly unnecessary. Sitemorse claim to have tested 24,999,983,359 more pages than they needed to. A sample size of 16,641 gives a 99% Confidence Level with a Confidence Interval of just 1. If you want a 99.999% Confidence Level you could bump the sample size to around 50,000, but I’m willing to bet the results wouldn’t be any different than if you’d stuck with 16,641.
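
Here’s the calculation behind those numbers, using the standard sample-size formula for a proportion with the worst-case assumption p = 0.5 and the conventional z-scores (2.58 for 99%, about 4.42 for 99.999%). At 30 Trillion URLs the finite-population correction changes nothing, so I’ve left it out.

```python
# Minimal sketch: required sample size for estimating a proportion,
# n = z^2 * p(1-p) / e^2, assuming worst-case p = 0.5.
def required_sample_size(z, margin_points, p=0.5):
    e = margin_points / 100.0                  # margin of error as a fraction
    return (z ** 2) * p * (1 - p) / (e ** 2)

# 99% Confidence Level, Confidence Interval of 1
print(f"{required_sample_size(z=2.58, margin_points=1):,.0f}")   # 16,641
# 99.999% Confidence Level, Confidence Interval of 1
print(f"{required_sample_size(z=4.42, margin_points=1):,.0f}")   # 48,841, i.e. roughly 50,000
```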

In other words, Sitemorse didn’t just overshoot the necessary sample size. Even if we say we want a 99.999% Confidence Level and a 50,000-page sample, they overshot it by a factor of roughly 500,000. That’s not being extra diligent, that’s being colossally stupid. They could have gotten the same data by investing 0.0002% as much work into this effort.

What does this mean?

I can’t speak to Sitemorse’s intent in making this claim that they’ve tested 25,000,000,000 web pages. I can only comment on its level of usefulness and logistical likelihood. On these counts Sitemorse’s claim is preposterous, extraordinarily unlikely, and foolish. The level of effort, resources, calendar time, and money required for the task are absurdly high. Even if their claim of testing 25,000,000,000 web pages is true, the act of doing so illustrates that they’re woefully inept at doing research and all too eager to waste their own time on fruitless endeavors.

Why do I care about this? What started me down this path was simple curiosity. Tenon has some research ideas we’d like to take on as well. It was immediately obvious to me that Sitemorse’s claim of testing 25,000,000,000 pages was absurdly large, but I also wondered just how much time such an undertaking would require. I decided to write about it merely because of how absurd it is to test 25,000,000,000 pages.

The Sitemorse article is an obvious sales pitch. Any time someone says they have “special knowledge” but doesn’t tell you what that knowledge is, they’re using a well-known influencing technique. In this regard, Sitemorse isn’t any different from others in the market, and certainly isn’t more worthy of negative judgement than others who do the same thing. The only difference in this case is that they tried to establish credibility for their “special knowledge” with a huge number, and that number actually harms their credibility rather than helping it. The reality is that there’s nothing special or secret out there.

An Actual Knowledge Share

Finally, I would like to close this post with a real knowledge share. While Sitemorse are positioning themselves as holders of special knowledge based on their research, I believe the following information doesn’t actually contain any surprises.

Top 10 issues, by Volume

  1. Element has insufficient contrast (Level AA)
  2. This table does not have any headers.
  3. This link has a `title` attribute that’s the same as the text inside the link.
  4. This image is missing an `alt` attribute.
  5. This `id` is being used more than once.
  6. Implicit table header
  7. This link has no text inside it.
  8. This link uses an invalid hypertext reference.
  9. This form element has no label.
  10. These tables are nested.

Issues by WCAG Level

WCAG Level   Count     Percent
Level A      765,278   52%
Level AA     357,141   24%
Level AAA    339,664   23%

Issues by WCAG Success Criteria

Success Criteria                                     Num. Instances   Percent
1.1.1 Non-text Content (Level A)                            110,582        7%
1.3.1 Info and Relationships (Level A)                      195,544       12%
1.3.2 Meaningful Sequence (Level A)                          20,613        1%
1.4.3 Contrast (Minimum) (Level AA)                         352,562       22%
1.4.5 Images of Text (Level AA)                                 255        0%
2.1.1 Keyboard (Level A)                                    204,361       13%
2.1.2 No Keyboard Trap (Level A)                              4,901        0%
2.1.3 Keyboard (No Exception) (Level AAA)                   185,056       12%
2.3.1 Three Flashes or Below Threshold (Level A)                 23        0%
2.3.2 Three Flashes (Level AAA)                                  23        0%
2.4.1 Bypass Blocks (Level A)                                24,286        2%
2.4.2 Page Titled (Level A)                                   1,033        0%
2.4.3 Focus Order (Level A)                                  18,776        1%
2.4.4 Link Purpose (In Context) (Level A)                   139,296        9%
2.4.6 Headings and Labels (Level AA)                          4,324        0%
2.4.9 Link Purpose (Link Only) (Level AAA)                  139,296        9%
2.4.10 Section Headings (Level AAA)                          15,289        1%
3.1.1 Language of Page (Level A)                              4,497        0%
3.3.2 Labels or Instructions (Level A)                       24,248        2%
4.1.1 Parsing (Level A)                                      56,843        4%
4.1.2 Name, Role, Value (Level A)                           103,883        6%

Issues By Certainty

Certainty   Num. Instances   Percent
40%                  2,371        0%
60%                323,036       29%
80%                 40,453        4%
100%               756,559       67%

Issues By Priority

Priority   Num. Instances   Percent
42%                 2,363        0%
47%                   435        0%
51%                 1,033        0%
54%               352,562       31%
57%                   920        0%
65%                56,843        5%
76%                 5,145        0%
81%                15,289        1%
85%                   161        0%
86%                   255        0%
90%                34,531        3%
96%                81,881        7%
100%              571,001       51%

Conclusion

  1. Nobody holds any special secrets when it comes to knowing how to make stuff accessible. If you’re interested in learning about accessibility there are already excellent resources out there from The Web Accessibility Initiative, WebAIM, and The Paciello Group. Each of those organizations freely and openly share their knowledge.
  2. (Not necessarily specific to Sitemorse) Anyone who claims to have special knowledge or expects you to sign up for their special downloadable whitepaper is full of shit and should be treated as such.
  3. Every. Single. Piece. Of. Data. Above. indicates one thing: the nature and volume of automatically detectable accessibility issues make it obvious that people constantly make high-impact yet easy-to-fix accessibility mistakes. Ignorance is the #1 roadblock to a more accessible web.

What this experiment showed me is that even though we know for a fact there’s only so much automated testing can find, plenty of people are still making these common mistakes over and over. Further: you don’t need to run a tool against 25,000,000,000 pages to tell you this; all you have to do is listen to users with disabilities. Maybe Sitemorse should’ve started there.

Get the full data set used in this post.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343