“Should we detect screen readers?” is the wrong question

The recent release of WebAIM’s 5th Screen Reader User Survey has reheated a simmering debate over whether it should be possible to detect screen readers. Currently there are no reliable means of determining whether a user with disabilities is visiting your site; specific to screen readers, this is because that information isn’t part of the standard data used in tracking users, such as user-agent strings. Modern development best practice has shied away from clunky user-agent detection and toward feature detection. The thought, then, is that it should be possible to detect whether or not a visitor is using a screen reader. This has drawn sharply negative reactions from the accessibility community, including from people I’d have expected to favor the approach. In all cases, people seem to be ignoring a more obvious shortcoming of this idea: accessibility isn’t just about blind people. Accessibility is about all people.
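
For contrast, feature detection tests for a capability you intend to use rather than sniffing the user-agent string. Here is a minimal sketch in the Modernizr spirit, using a hypothetical check for the native <dialog> element:

// Test for the capability itself, not the browser's identity.
function supportsNativeDialog(): boolean {
  return typeof HTMLDialogElement !== 'undefined' &&
    'showModal' in document.createElement('dialog');
}

if (!supportsNativeDialog()) {
  // Fall back to a scripted dialog implementation here.
}

No analogous check exists for screen readers: assistive technology reads the page through platform accessibility APIs and is deliberately invisible to page scripts, which is exactly why this detection question keeps resurfacing.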

Getting data at a sufficient level of granularity is a bit difficult, but the conventional wisdom around disability statistics is:

  • There are more people who are low-vision than who are blind
  • There are more people who are hard of hearing than who are visually impaired
  • There are more people who are motor impaired than who are hard of hearing
  • There are more people who are cognitively impaired than all of the above

The exact ordering can vary by age group, but Census Bureau data does validate the claim that, across all age groups, the percentage of people who are visually impaired is consistently the smallest of all disability types. In other words, if your approach to accessibility has anything to do with detecting screen readers, you’ve clearly misunderstood accessibility.

But let’s skip that for a moment. Let’s assume you could detect a screen reader as easily as including Modernizr on your site. Now what? What do you do differently? No matter what you do, your approach “solves” accessibility issues for less than 2% of the working-age population. Put another way, whatever money or time you’ve spent on detecting and adapting to screen reader users, you’ve only gotten yourself about a fifth of the way toward being “accessible”. Instead of asking whether it should be possible to detect screen readers, the question should be “how do we make our site more usable for all users?”


Everything you know about accessibility testing is wrong (part 2)

In Everything you know about accessibility testing is wrong (part 1) I left off talking about automated accessibility testing tools. It is my feeling that a tool of any kind absolutely must deliver on its promise to make the user more effective at the task they need the tool to perform. As a woodworker, I have every right to expect that my miter saw will deliver straight cuts. Further, if I set the miter saw’s angle to 45 degrees, I have the right to expect that the cut I make with the saw will be at exactly 45 degrees. If the saw does not perform the tasks it was designed to do, or does not do so accurately and reliably, then the tool’s value is lost and my work suffers as a consequence. This is the case for all tools and technologies we use, and it has been the biggest failing of automated testing tools of any kind, not just those related to accessibility; security scanning tools, too, tend to generate false results at times.

I’m not sure if this is their exact motivation, but it often seems as though accessibility testing tool vendors measure their tool’s value by the total number of issues it can report on, regardless of whether those issues are accurate. In fact, nearly all tools on the market will tell you about things that may not actually be issues at all. In a 2002 evisceration of Bobby, Joe Clark wrote, “And here we witness the nonexpert stepping on a rake,” going on to highlight examples of wholly irrelevant issues Bobby had reported. From this type of experience came the term “false positives” for reported issues that are inaccurate or irrelevant, and false positives remain a favorite whipping post for critics of accessibility testing tools.

It would be easy to dismiss false positives as the product of a young industry, because nearly all tools of the time suffered from the same shortcoming. Unfortunately, the practice persists today. For example, in the OpenAjax Alliance Rulesets, merely having an object, embed, applet, video, or audio element on a page will generate nearly a dozen error reports telling you things like “Provide text alternatives to live audio” or “Live audio of speech requires realtime captioning of the speakers.” This practice is ridiculous. The tool has no way of knowing whether the media has audio at all, let alone whether that audio is live or prerecorded. Instead of reporting on actual issues found, the tool’s developer would rather saddle the end user with almost a dozen possibly irrelevant issues to sort out on their own. This type of overly ambitious reporting does more harm than good, both for the individual website and for the accessibility of the web as a whole.

No automated testing tool should ever report an issue that it cannot provide evidence for. Baseless reports like those I mentioned from the OpenAjax Alliance are no better than someone randomly pointing at the screen, saying “Here are a dozen issues you need to check out!”, and walking out of the room. An issue report is a statement of fact. Like a manually entered issue report, a tool should be expected to answer very specifically what the issue is, where it is, why it is an issue, and who is affected by it. It should be able to tell you what was expected and what was found instead. Finally, if a tool can detect a problem, then it should also be able to make an informed recommendation of what must be done to pass a retest.
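
To put that expectation in concrete terms, here is a sketch of the minimum fields a tool-generated report should carry. The shape is illustrative only, not any particular tool’s format:

// The minimum information an automated tool's issue report should provide.
interface IssueReport {
  what: string;            // the specific problem, e.g. "img element has no alt attribute"
  where: string;           // location: URL plus a CSS selector or source line
  why: string;             // why it fails, ideally citing the guideline violated
  whoIsAffected: string[]; // e.g. ["screen reader users"]
  expected: string;        // what the tool expected to find
  found: string;           // the evidence: what it actually found
  recommendation: string;  // what must change to pass a retest
}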

False positives (or false negatives, or whatever we call these inaccurate reports) do everything but state facts. By reporting issues that don’t exist, they confuse developers and QA staff, cause unnecessary work, and harm the organization’s overall accessibility efforts. I’ve observed several incidents where inaccurate test results caused rifts in the relationship between QA testers and developers. In these cases, the QA testers trust the tool’s results implicitly – after all, why shouldn’t they expect the tool’s results to be accurate? – and log issues into internal tracking systems based on those results. Developers then must sift through each one, determine where the issue exists in the code, attempt to decipher the issue report, and figure out what needs to be fixed. When the issue report is bogus, whether through inaccuracy or irrelevance, it generates – at the very least – unnecessary work for all involved. Worse, I’ve seen numerous cases where bugs get opened, closed as invalid by the developer, and then reopened by the QA tester after a retest because the tool has again claimed the issue still exists. Every minute developers and QA testers spend arguing over whether an issue is real is a minute that could be spent remediating issues that are valid.

Consequently, it is best either to avoid tools prone to such false reports or to invest the time required to configure the tool so that the tests generating them are squelched. Do that, and the systems under development are likely to get more accessible, and developers less likely to brush off accessibility. In fact, I envision a gamification-like effect from reporting and fixing only real issues. A large number of these definitively testable accessibility best practices are quick to fix with minimal impact on the user interface. Over time, developers will instinctively avoid those errors as accessible markup becomes part of their coding style, and automated accessibility testing can remain part of standard development and QA practice, catching anomalous mistakes rather than instances of bad practice. That can never happen while developers are left chasing their tails, trying to decipher which reported issues are real problems.

Current automated accessibility testing practices take place at the wrong place and time and are done by the wrong people

Automated testing tools can only test what they can access. Historically this has meant that the content to be tested has to exist at a URL the tool can reach; the tool performs a GET request for the URL, receives the response and, if the response is successful, tests the document at that URL. Implicitly this means that work on the tested document has progressed to a point where it is, in all likelihood, close to being (or is, in fact) finished – unless the tested URL is a mockup and the tool resides in, or has access to, the development environment. Historically, the tested documents have usually been deployed already.

This is the worst possible place and time for accessibility testing to happen, because by that point in the development cycle a vast array of architectural, design, workflow, and production decisions have been made that directly constrain the team’s ability to fix many of the issues that will be found. This is especially true of choices like front-end JavaScript frameworks, MVC frameworks, color palettes, and the templates and other assets presented via a content management system. In each case, early pre-deployment testing could help determine whether additional work is needed or whether different products should be selected. Post-deployment remediation is always more costly, more time consuming, and less likely to succeed. In all cases, accessibility testing performed late in the development lifecycle has a very real and very negative impact, including lost sales, lost productivity, and increased risk to project success. Late testing also increases the organization’s risk of litigation.
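
To make the first half of that concrete – the URL-bound, fetch-then-test cycle – here is a minimal sketch. The URL is hypothetical and the single regex-based rule is deliberately crude; a real tool would parse the DOM:

// Hypothetical staging URL; the page must already be deployed and reachable.
const url = 'https://staging.example.com/checkout';

// Assumes a modern runtime with global fetch and top-level await.
const response = await fetch(url);
if (!response.ok) {
  throw new Error(`GET ${url} failed with status ${response.status}`);
}
const html = await response.text();

// One crude example rule: img tags with no alt attribute at all.
const imgsWithoutAlt = html.match(/<img(?![^>]*\balt=)[^>]*>/gi) ?? [];
console.log(`${imgsWithoutAlt.length} img tag(s) missing an alt attribute`);

Nothing in that cycle can run until the page is reachable, which is exactly why this style of testing arrives so late.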

The best way to avoid this is, as the popular refrain goes, to “test early and test often”. Usability and accessibility consultants worldwide frequently lament that their clients don’t do so. This website, for instance, happens to perform very well in search engines for the term “VPAT”, and about once a week I get a call from a vendor trying to sell to a US government agency that has asked for a VPAT. The vendor needs the VPAT “yesterday”, and unfortunately at that point any VPAT they get from me is going to contain some very bad news that could have been avoided had they gotten serious about accessibility much earlier in the product lifecycle. In fact, as early as possible: when the first commit is pushed to version control, and when the first pieces of content are entered into the content management system. Testing must happen before deployment and before content is published.

Stay tuned for part 3 where I talk about critical capabilities for the next generation of testing tools.


Everything you know about accessibility testing is wrong (part 1)

My first experience with accessibility and, therefore, accessibility testing, came from Bobby.

In 1995, CAST launched Bobby as a free public service to make the burgeoning World Wide Web more accessible to individuals with disabilities. Over the next decade, Bobby helped novice and professional Web designers analyze and make improvements to millions of Web pages. This work won CAST numerous awards and international recognition.

CAST no longer supports the Bobby accessibility testing software. Bobby was sold to Watchfire in 2004 which, in turn, was acquired by IBM in 2007.

Although Bobby is no longer available as a free service or standalone product, it is one of the tests included within the IBM Rational Policy Tester Accessibility Edition software, the comprehensive enterprise application for testing websites. (http://www.cast.org/learningtools/Bobby/index.html)

Bobby was so popular that the above URL remains the #4 result in Google for the word “Bobby”. My first experience with Bobby came in the form of a rejection email for a job application. In the early 2000s I was attempting to get a job as a web developer in the Washington, DC area. At the time, Section 508 was new-ish, and government contractors such as Northrop Grumman, Raytheon, Lockheed Martin, and the like were very focused on hiring web developers whose work was “508-compliant”. On one occasion I got a response to my job application asking me to send over some work samples. I responded with a series of publicly available URLs showing off my work, and within a few hours received an email saying that they would be unable to hire me because my work had failed a test by Bobby. Bobby, whoever or whatever it was, had become the thing standing between me and my ability to put food on the table. Unacceptable.

For my part, I did as I always do: I became obsessed with accessibility. Today, when people ask how I got interested in accessibility, I tell them the above story and say I have no “legitimate” reason for my interest. In other words, I don’t have a disability myself, nor does any of my family members or friends. I don’t have an interesting back-story like Ted Henter, creator of JAWS. Instead, I’ve come to view accessibility as a quality-of-work issue. As a developer, the quality of my work directly impacts users’ ability to consume and interact with the content I create. To me, persons with disabilities are no different from anyone else using my site; all of the human rights framing around accessibility is purely ancillary. If users have a hard time, I’ve done a bad job. My interest in accessibility is as simple as that.

Perhaps this is because I was so new to accessibility at the time, but I view that period fondly as one of incredible opportunities to learn. Among the educational resources I discovered, the best by far was the WebAIM.org Discussion List. The resources on the WebAIM website itself were immensely useful, but the active and friendly atmosphere of their discussion list was, and remains, the best community for those new to web accessibility. The list of active participants reads like a who’s who of accessibility. It didn’t take long before I noticed that many of the more notable contributors to the community had a high level of disdain for automated testing tools. This disdain wasn’t altogether unfounded, as documented by Jukka Korpela in 2002. In the long term, however, it has created roadblocks to the adoption and use of tools for accessibility testing and, in my opinion, has delayed the development of newer and better tools for the task. The end result has been tools and procedures that test the wrong thing at the wrong time, and a generalized atmosphere of resistance to tools of any kind.

Resistance to tools in general is somewhat justified

In the accessibility community, resistance to automated accessibility testing tools comes in two flavors: those who say tools cannot provide complete coverage of all accessibility issues, and those who say such tools take the focus off the user and put it on the system. Both objections are born of perspectives that don’t fully understand the purpose of automated testing. Further, they fail to consider that although both are right, neither actually negates the value of automated testing.

Evolutionary biologists and anthropologists cite two major reasons for mankind’s evolution into the dominant species on Earth: the use of tools and the taming of fire. Beginning with rudimentary stone tools, man’s first tools enabled easier access to food: we could hunt for, butcher, and prepare food more easily. The evolution of tools and technology isn’t unlike the biological evolution of a species. Our opposable thumbs, control of fire, and larger brains form the basic trunk, with broad categories of tools and technologies forming the limbs of the tree and more specific tools and technologies forming the twigs. As in biological evolution, certain types of tools and technologies die off along the way, losing favor and being replaced with better ones.

Forge welding came about in the Bronze and Iron Ages and remained the dominant form of welding for thousands of years. The Middle Ages brought many improvements to forge welding techniques, which remained in use for hundreds of years. In the early 1800s, however, the discovery of the electrical arc revolutionized welding and brought about advances that would later evolve into SMAW (shielded metal arc welding), commonly referred to as “stick welding”. Since the late 1800s and continuing today, newer, better, and safer methods of welding have continually been developed, including such high-tech methods as laser beam welding and electron beam welding.

In all cases the tools and technologies we employ are aimed at one primary goal: accomplish a task more easily, efficiently, and with higher quality. In the earliest stages of tool development the tools were aimed at doing things we were already doing, but doing them better, such as using rocks to smash nuts or sharp flints to butcher meat. But in the Bronze and Iron Ages our goals were more ambitious. We aimed at doing things we could never accomplish without the tool. As tools and technologies continue to evolve, mankind’s goals remain the same: make things easier, faster, better and make the previously impossible become possible.

At a fraction of the size and a fraction of the cost, the smartphone in your pocket holds more than 13,000 times as much data as the first hard disk, the IBM 350 RAMAC, created in 1956, which weighed over a ton and cost $10,000 in 1956 dollars. Literally everything around us in the modern world is the result of technological evolution including, in all likelihood, the grass on your lawn. This fact is, quite frankly, why I’m so baffled by resistance to automated accessibility testing. In any case where a capability exists that can replace or reduce human effort, it only makes sense to use it. In any case where we can avoid repetitious effort, we should seek to do so. Of course this isn’t always possible. Even in the welding example, some jobs can’t sustain the expense of robotic welding – perhaps the job is unique, or the production run is too small to justify it. But that doesn’t prevent the creation of a Welding Procedure Specification and the use of a specific process to create the end product. Automated accessibility testing is no different.

While there are a number of seemingly insurmountable challenges relating to the accuracy or completeness of automated accessibility testing, that doesn’t mean such testing has no value. Automated testing can and should be used in a way that makes the tool’s user more efficient and effective at their job – namely, finding accessibility issues. For example, it is trivial to program an automated tool to find images without alt attributes, so no human should ever have to spend time looking for those. However, machines are wholly incapable of subjectively interpreting the meaning of an image in context, so judging the quality of the text in an alt attribute is a task that does require a skilled human. This will probably always be the case, at least as long as the Web as we know it exists.
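
As a sketch of the machine’s half of that division of labor, assuming a browser or jsdom-style DOM environment – it flags only the objectively detectable failure, the absent alt attribute, and leaves judging existing alt text to a human:

function findImagesMissingAlt(root: Document | Element): HTMLImageElement[] {
  // An empty alt="" is a deliberate signal for decorative images,
  // so only a completely absent attribute is flagged.
  return Array.from(root.querySelectorAll<HTMLImageElement>('img'))
    .filter((img) => !img.hasAttribute('alt'));
}

// Usage:
for (const img of findImagesMissingAlt(document)) {
  console.warn('img with no alt attribute:', img.src);
}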

In other words, an automated tool can tell you when things are bad, but it cannot tell you when things are good. Additionally, the array of possible “bad” things a tool can reliably and accurately find is rather small: my own and others’ estimates suggest that only about 25% of accessibility best practices can be tested definitively through automation. It is this lack of complete and reliable coverage that detractors point to as evidence against the usefulness of automated testing as a whole. The somewhat epidemic problem of existing tools returning overly conservative, irrelevant, or erroneous results only strengthens those claims. But this makes the case against specific tools, not against automated testing. The fact that automated accessibility testing tools have historically been prone to delivering bad results doesn’t mean good results are impossible.

Stay tuned for post #2 in this series where I discuss the challenges and proper use of tools for testing web accessibility.


DIY Post: Replacing Portion of Wood Floors

Jennifer Groves and I purchased our house about seven years ago. Most of the rooms had carpet, though two bedrooms had exposed wood floors. Given the age of the house, we figured that if some of the rooms had wood floors, it was likely they all did. I peeled up the carpet in some corners and, sure enough: hardwood. So I removed all the carpet in the house. Unfortunately, our hallway floors were in too bad a shape to sand and refinish. We had an aging dog who sometimes couldn’t hold her water, and her accidents usually happened in the hallway. We would clean up with our Hoover SteamVac, but the SteamVac can’t pull up all of the water, so over time the wood underneath became stained. The only options at that point were to put carpet back down or replace the wood. I chose to replace the wood.

Replacing the wood of course requires that you first remove the wood you wish to replace. This is far more difficult than it sounds when removing just a portion of the floor. If you’re removing all of the wood in the house, or at least on that level, you can rip out the whole floor without any regard for damaging other floorboards. When replacing only a portion, you need to take care not to damage the boards adjacent to the portion you’re replacing.

Hallway with wood floors removed and bare subfloor exposed.

The following two images show why it is important to take care during removal. Proper installation of a wood floor involves staggering the boards; proper replacement requires maintaining that stagger, and therefore requires removing the boards entirely. This was pretty easy to do in the hallway, where the boards ran wall to wall.

View of doorway area of floor showing staggered boards

Unfortunately, removing the entire board is impossible in the doorways. When our house was built in 1956, the boards used were 13 feet long. Since our hallway is only 6 feet wide, many of those boards run from one bedroom to the next, passing through the hallway along the way. Removing the boards that extended into the bedrooms often meant cutting them.

Another view of doorway. These boards are cut deeper into the room.

Finding a proper tool to remove these boards was surprisingly hard for me, as I’d never done this type of work before. I experimented with a number of different ideas until I heard of the Fein MultiMaster. Holy cow. If you have an oscillating tool and it isn’t the Fein MultiMaster, you are wrong. Prior to buying the Fein I had tried a battery-powered Craftsman oscillating tool, and it was horribly inadequate for cutting even one of the boards; it didn’t have the muscle or battery life for the job. Ultimately I had to make about 18 cuts, so the Craftsman clearly wasn’t going to cut it (pun intended). I returned the Craftsman, ordered the Fein from Amazon, and once it was delivered I finished the cuts in about 45 minutes.

Closer doorway view showing the cuts.

The previous owners had, at some point, experienced a pretty bad termite infestation, and the board at the top of the steps required replacement. The existing board was a 2-by-6, so I replaced it and beefed up portions of the underlying floor joists at the same time.

Showing the 2 by 6 board replaced.

The next step in the process is to lay down red rosin paper as an underlayment before laying down the floor.

Hallway subfloor covered by red rosin paper

The only problem with putting down the rosin paper is that you can no longer see where the floor joists are. The plywood subfloor has nails in it that show where the joists are, but with the rosin paper down you can’t see the nails. The solution? My magnetic stud finder.

Close-up of magnetic stud finder on top of red rosin paper

A magnetic stud finder works by being attracted to the nails in a stud. Since the subfloor was already nailed down, all I had to do was apply the same principle: find the nails with the stud finder! I made marks wherever I found a nail.

Placing marks on the red rosin paper where the stud finder found a nail.

I followed that up by striking a line across those marks. When laying the floorboards, I would nail the boards to the joists wherever they met these lines.

View of hallway showing lines on the red rosin paper where the floor joists are

The next step is laying boards. If you’re starting completely fresh, you need to make sure your first board is straight and square with the room – note that this isn’t necessarily square with the walls, because the walls themselves may not be square. In my case, I decided to cheat by keeping the original “first” board, since it wasn’t stained and could be refinished. I then worked outward from that board, nailing down the new floor.

First three boards nailed down.

You can either rent a flooring nailer or buy one. Since I knew I was going to replace more flooring later, I went ahead and bought one. If you rent a nailer, the rental runs 24 hours at a time; after a couple days of rentals you may as well have bought the darn thing. So I did!

About half the hallway done.

Here’s the fully installed hallway floor!

All of the floor laid down.

Some commenters on Reddit disagreed with the way I dealt with the staggering of the boards, suggesting that the “wood will never match”. Here’s a view of the finished wood in one of the doorways. While the wood isn’t a 100% match, you really have to be looking for it in order to notice.
Finished view of doorway showing the match-up

What about price? All in all, I wound up paying for the wood, the red rosin paper, the nails, the Fein tool, and the flooring nailer. I already owned the air compressor and the other tools I needed. Even with the tool purchases, I saved about $200 compared to the quotes I had gotten from professional installers.

Stay tuned for two more posts on floor installation and refinishing.

Quick Tip: Text Characters as Visual Separators

I’ve been running into these pretty frequently lately, so I figured I’d throw something together about it: text characters as visual separators. As much as I’d like to say modern development practice has grown beyond such things, it apparently hasn’t.

<ul>
<li>Foo</li>
<li>|</li>
<li>Bar</li>
<li>|</li>
<li>Bat</li>
<li>|</li>
<li>Baz</li>
</ul>

The above code structure, obviously scrubbed to protect the guilty, was used to provide visual separation between links in a website’s footer. Screen readers will likely announce the number of li elements and then read the list. Depending on the specific screen reader and the user’s settings, it may or may not announce the pipe characters; either way, the user has been told to expect seven items when only four are actual links. In the grand scheme of things, this probably isn’t the worst thing you could do, but it betrays a lack of skill, especially considering how super easy it is to do this right:


<ul>
<li>Foo</li>
<li>Bar</li>
<li>Bat</li>
<li>Baz</li>
</ul>

li {
    /* Remove bullets and lay the items out in a row */
    list-style-type: none;
    display: inline;
    margin-right: .3em;
}
/* Draw the pipe with CSS, skipping the last item, so it never appears in the markup */
li:not(:last-of-type):after {
    content: ' | ';
}

See this at: http://jsfiddle.net/karlgroves/8wXa8/


US DOJ Intervenes in NFB et al. vs. HR Block Case

From the US Department of Justice’s ADA office:
The Justice Department announced today that it seeks to intervene in a lawsuit against HRB Digital, LLC and HRB Tax Group, Inc. (“Block”) in federal court in Boston to remedy violations of the Americans with Disabilities Act (ADA). The department’s proposed complaint in intervention in the lawsuit, “National Federation of the Blind v. HRB Digital LLC and HRB Tax Group, Inc.”, alleges that Block discriminates against individuals with disabilities in the full and equal enjoyment of its goods and services provided through www.hrblock.com.
The complaint is online at http://www.ada.gov/hrb-proposed-complaint.htm

The above message was posted to the WebAIM Discussion List yesterday, November 26, 2013, by Bevi Chagnon.

The history of web accessibility litigation is an interesting one, made more interesting by the fact that the first notable lawsuit regarding web accessibility took place 14 years ago, and yet the DOJ has still not issued specific regulations for the public web. This lack of clarity has not stopped the fairly steady stream of accessibility lawsuits in recent years. The DOJ has consistently stated its position that the Web is a place of public accommodation and that, by law, the ADA applies. Effectively, its position has been that despite the lack of a revised Final Rule, all websites must be accessible. This latest action is just another in a series in which the DOJ is demonstrating this.

What does this mean?

As my friend and former co-worker Elle Waters likes to say, “Do you want to spend your money [on web accessibility] on your budget and timeline or on a budget and timeline dictated by someone else?” What is clear – and made clearer with each new lawsuit – is that risk reduction makes a compelling business case for accessibility.


Issue Reporting: How to ensure your issue report is successful

Issue tracking systems employed by development teams can be configured in a variety of ways, but they almost always ask for essentially the same basic information. When creating an issue, follow the guidance below to ensure that developers can quickly and accurately fix the issues reported to them.

Summary (may also be referred to as “title”) – This should be a high-level description of the issue, including where it exists. Because this field is usually a single-line text input, it must be very concise and clear, such that anyone reading it immediately understands the problem: “Registration form: duplicate label ‘for’ values leave the ‘Desired Username’ input unlabeled.”

Description – This is where you’d enter a longer description of the issue: specifically what the problem is, where it is, why it is a problem, who is affected by it, and how severe it is. Also include the existing code and recommended changes to it. If specific code examples can’t be supplied, use a text description, and always provide specific information on how to fix the issue. There’s no such thing as too much detail here, as long as it is relevant to the specific issue you’re reporting.

For example:

The <label> elements for “Desired Username” and “E-mail Address” both have a ‘for’ value of “email”. As a result, both labels are programmatically associated with the “E-mail Address” input, and the “Desired Username” input (id “username”) has no associated label at all. This is likely a typo, but it means screen reader users may enter the wrong text in one of these fields and may repeatedly fail validation while trying to figure out which field is which. Every text input must have a <label> element with a ‘for’ attribute that matches the ‘id’ attribute of the corresponding input, and each label/input pairing must be unique.

Existing Code:

<label for="email">Desired Username</label>
<input type="text" name="username" id="username" />
<label for="email">E-mail Address</label>
<input type="text" name="email" id="email" />

Recommended Code:

<label for="username">Desired Username</label>
<input type="text" name="username" id="username" />
<label for="email">E-mail Address</label>
<input type="text" name="email" id="email" />

Platform – This information is most needed if there are specific platform issues to consider during debugging. If you’re unsure whether that’s the case, just include it to be on the safe side. Include your operating system & version, browser & version, and assistive technology & version.

Steps to Reproduce – This should include what it takes to see the problem first-hand. This is really useful for things like focus management issues, but less so for basics like form labels, alt text, or table headers, which can be clearly articulated in the description, especially if you’re including code samples.

The following format is preferred:

Steps to reproduce:

  1. Go to Page X
  2. Activate the link labeled “Help”
  3. A dialog will appear titled “Get help from Y”

Expected Result:

  1. Focus would be shifted to the first actionable item in dialog
  2. Focus would remain in dialog only, until dialog is closed

Actual Result:

  1. Focus shifts to the first actionable item in dialog
  2. As they tab through, the user can gain focus on items outside of the dialog

By clearly and accurately describing the issue you’ve experienced, you can help the development staff understand the issue and guide them toward making a fix. Unclear, confusing, or incomplete issue reports only tend to delay repair while developers try to discover what the actual problem is. The better your report, the more likely the issue will be fixed quickly and accurately.

I also highly recommend reading Contacting Organizations about Inaccessible Websites.


Taking Accessibility to the Mainstream

For years I’ve been saying that accessibility people need to branch out to the mainstream. Preaching to the converted doesn’t do much to increase mainstream adoption of accessibility as a way of thinking. Developers typically don’t have any depth of understanding when it comes to accessibility. Developers need to know … practical ways of using accessibility techniques so that users benefit from them, but how can they get that information if we (the accessibility community) aren’t aggressively trying to get it out there? For this reason, I’ve been trying to do exactly that. As a speaker and writer, I’ve been trying to reach more and more mainstream audiences. Conferences like CSUN and the various accessibility camps are great, but impacting mainstream development communities will only happen by taking the message directly to those communities instead of waiting for them to come to you.

So, how has it gone? So far, it has been hit and miss. I’ve been attempting to speak at the local U(x)PA chapter for years (since 2006, in fact) with no success, and other IA/UX organizations and meetups have gone much the same way, as have local Refresh groups and developer meetups. At the same time, I’ve been honored to take part in events like the Digital 360 Conference and ParisWeb, both of which have a general development focus and include accessibility as part of the main conference. While the rejections (or non-responses, as is often the case) can be frustrating, the successes are super fulfilling. For instance:

J’avoue que certaines conférences sur l’accessibilité à #parisweb m’ont ouvert un peu les yeux. Surtout celle de Karl Groves
Nicolas Florian

Roughly translated, the above says: “I admit that some of the accessibility talks at #parisweb opened my eyes a bit. Especially Karl Groves’s.” If I can impact even one person this way every time I speak, all of my efforts will be worth it. The frustration of rejection and non-response is washed away every time someone walks away with new insight and knowledge about accessibility. Because of this, I plan on continuing to press for more inclusion of accessibility in mainstream development circles, and I hope all of you will join me.


The End User Uber Alles or, you got your reality in my idealism

As a web developer, one of the biggest sources of frustration is developing a website that works across the wide array of user agents and operating systems visitors may be using. The web standards movement was supposed to "fix" that. It made good progress, and then CSS3, HTML5, HTML5 multimedia, and mobile devices tripped things up again. Luckily there are people out there creating tools and techniques to help us keep up and find workarounds for things like spotty support for new features, which reduces the frustration somewhat. That is, until you add accessibility into the mix. Then you have to add assistive technologies to your list of concerns. In addition to understanding the various differences in browser support for CSS properties, HTML elements, and DOM methods and properties, you now have to concern yourself with the differences between types, brands, and versions of assistive technologies. Like browser vendors, assistive technology vendors each have their own direction, goals, and plans that may not agree with other vendors of similar products. Like browsers, assistive technology software sometimes has bugs and missing or incomplete features. And as with browsers, developers must make an informed, rational decision about which assistive technologies, brands, and versions they plan to support. Just as you shouldn’t be expected to support all versions of all browsers on all operating systems, you also shouldn’t be expected to support all versions of all assistive technologies on all operating systems.

None of this would be necessary if everyone just followed standards. Sadly, that’s no more the case for assistive technologies than it is for browsers. I was reminded of this today, in fact, while doing some research on text alternatives for images. The way text alternatives should behave is described in The Roles Model – 5.2.7.3. Text Alternative Computation in WAI-ARIA, which describes "how user agents acquire a name or description that they then publish through the accessibility API" and provides an order of precedence for the various bits of programmatically determinable text that can be used. The order of precedence should be:

  1. aria-labelledby
  2. aria-label
  3. alt attribute
  4. title attribute
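
As a minimal sketch of that precedence for an <img> element – the real accessible name computation involves many more steps, so treat this as an illustration rather than a spec-complete implementation:

function textAlternative(img: HTMLImageElement): string {
  // 1. aria-labelledby: concatenate the text of the referenced elements
  const labelledby = img.getAttribute('aria-labelledby');
  if (labelledby) {
    const text = labelledby
      .split(/\s+/)
      .map((id) => document.getElementById(id)?.textContent ?? '')
      .join(' ')
      .trim();
    if (text) return text;
  }
  // 2. aria-label
  const label = img.getAttribute('aria-label');
  if (label && label.trim()) return label.trim();
  // 3. alt attribute (an empty alt="" is a deliberate empty name)
  const alt = img.getAttribute('alt');
  if (alt !== null) return alt;
  // 4. title attribute, as a last resort
  return img.getAttribute('title') ?? '';
}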

What I found was a bit different, and inconsistent across assistive technologies. This is by no means a complete survey of all scenarios or assistive technologies, and there are complex interplays between different browsers and accessibility APIs, but it still illustrates the point: relying on the ideal behavior is as unrealistic for assistive technologies as it is for browsers.

When it comes to making design and development decisions, how likely are you to implement new HTML5 or CSS3 features without fallbacks? How likely are you to say, "Well, screw IE users for not supporting the <meter> element. I’m using it anyway!" Not at all, right? But you might be willing to come to that conclusion when it comes to box-model issues on IE6. These are the same types of considerations to put into assistive technology support. We must aim our efforts at the goal of achieving the greatest common good.

Thinking about this reminded me of the HTML Design Principles, which "offer guidance for the design of HTML in the areas of compatibility, utility and interoperability."

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity. In other words costs or difficulties to the user should be given more weight than costs to authors; which in turn should be given more weight than costs to implementors; which should be given more weight than costs to authors of the spec itself, which should be given more weight than those proposing changes for theoretical reasons alone. Of course, it is preferred to make things better for multiple constituencies at once.
HTML Design Principles – Priority of Constituencies

This sentiment deserves application well beyond the design principles of the HTML5 specification; it should be how we approach all design and development decisions – user first. The goal of any web design and development effort should be to use HTML and CSS features in a way that the broadest reasonable range of people can use the system. Sometimes that will mean that users of certain brands or versions of assistive technologies will have a less-than-ideal experience. Users who choose software that doesn’t support modern technologies need to understand that this choice impacts their enjoyment of, and productivity on, the web; other users may need to take their issues directly to the makers of their assistive technologies.

This does not absolve us developers of the responsibility to provide the broadest possible usability for all; that was the original reason for my research into text alternatives. What I found, in this particular case, is that although there are multiple ways of providing a valid text alternative that should be supported by all modern screen readers, only one approach ensures usability for all users of all assistive technologies in all browsers on all operating systems: the tried-and-true ‘alt’ attribute. Modern browsers and assistive technologies can use aria-label or aria-labelledby, but that approach doesn’t provide support for voice dictation software or for software and user agents that don’t support WAI-ARIA.

These are the types of decisions we make every day as web developers. Do we use the new shiny or the tried-and-true? We need to answer that question from the user’s perspective, not the developer’s – with the user taking precedence.


Some thoughts on automated web accessibility testing

My feelings about automated accessibility testing have vacillated throughout my career. My introduction to accessibility was through automated testing: as a new web developer applying to jobs with US government contractors shortly after Section 508’s grace period ended, I was rejected several times because my work failed a test by Bobby, the most popular accessibility tool of the time. Later, as I came to understand more about accessibility and the amount of subjectivity in determining what is or is not accessible, I came to believe that only a human can perform this function. The pragmatic reality is somewhere in between.

I feel like we live in an exciting time in history, one in which the technologies of the Web are evolving more rapidly than I’ve ever seen. The many new types of web-enabled devices, the many uses for web-based technologies, and the many new and emerging input and consumption modalities present numerous challenges for developers and organizations. Luckily, there are plenty of enthusiastic and creative people addressing those challenges by creating toolsets, frameworks, and processes to make development quicker, easier, and more robust. One particularly interesting area is automated testing and automated build systems and frameworks. Some systems, such as Jenkins, can handle automated testing and building, and some companies have built their own systems using the numerous open source projects that exist for such testing, making it easy for any development team to take advantage of automation.

At the core of this is the DRY principle: Don’t Repeat Yourself. In programming, DRY is about not duplicating functionality, because duplication adds maintenance burden, logic conflicts, refactoring problems and, left unchecked, a great deal of entropy. But DRY also applies to human effort, processes, and overall efficiency. We see demonstrations of the DRY principle throughout history, going back to the invention of metal casting around 3500 BC. The idea of creating a process once and having a way to repeat it just makes sense. So does having a process or tool that does something entirely differently but produces the same outcome. To this day, workers on the line at General Motors auto plants who come up with a newer, faster, better, or safer way of building a car get a hefty bonus, depending on how much they improve the process; this has produced numerous innovations in safety and efficiency. This is why automated web accessibility testing makes so much sense to me.

Those who are quick to criticize automated web accessibility testing may not understand this idea of not repeating effort. Or maybe they disagree that automated testing is effective. I believe these feelings are well founded, because the toolsets currently at their disposal have made them feel that way. If the point of automation is to replace human effort, and the tool actually adds effort or doesn’t do its job properly, then the natural conclusion is that automation is a bad idea.

In the manufacturing of physical products, the final manufactured products are tested to ensure a specific level of performance against pre-defined criteria. In some cases the criteria are internally developed; in others, such as electronic products, the standards are external. Depending on the type of product, the standards often cover things like safety, reliability, accuracy, longevity, robustness, and sustainability. Naturally, the testing and measuring equipment is itself tested for things like accuracy and sensitivity.

From a business perspective, the manufacturing processes themselves are scrutinized as well, to ensure the process is as safe, accurate, reliable, and efficient as possible. In terms of efficiency, the business wants the process to be cheap and quick: delivering a good product that’s cheap to make and sells for a lot is every manufacturer’s dream.

What do we take from the above few paragraphs? Whatever process we employ, it needs to be fast, reliable, and accurate, and the end result must deliver high value. But what do we see in automated web accessibility testing tools? Often the opposite. Often the tools add a layer of complexity and a disruption of process that makes using the tool more trouble than it is worth. Reports are littered with false positives of little relevance or importance. I’ve been discussing problems with automated website accessibility testing tools for years – even issuing challenges to vendors to improve the usefulness and usability of their products. The reality is that the present model for automated testing suites like Worldspace, Compliance Sheriff, AMP, etc. is fundamentally broken. They will never eliminate the ways in which they disrupt modern web development and QA workflows.

This does not, however, mean that automated web accessibility testing itself is bad. On the contrary: I believe strongly in it, and my belief gets stronger every time I think about it. At last year’s CSUN I presented a session titled Choosing an Automated Accessibility Testing Tool: 13 Questions you should ask. I now think there are really only three things that warrant consideration:

  1. Flexibility and Freedom. These tools exist for one reason only: to work for you. Every second you spend learning the UI, figuring out how to use it, or sifting through results is a second of development time that could go into actually improving the system. The tool must be flexible enough to offer the freedom to be used in any environment, by any team, of any size, in any location.
  2. Accuracy. Developers of accessibility tools want to find every single issue they possibly can. Unfortunately, this often includes things that are, in context, irrelevant or of low impact, or that require additional human effort to verify. This is a disservice. Tool vendors need to understand that their clients may not have the time, budget, or knowledge to sift through false positives. The user should be able to immediately eliminate or ignore anything that isn’t an undeniable issue.
  3. Understandability. The user of the tool needs to know exactly what the problem is, how much impact it has, where it is, and how to fix it. Overly general, overly concise, or esoteric result descriptions are unhelpful. If a tool can find a problem, then it can make intelligently informed suggestions for remediation.

Those who are against automated testing probably don’t have the perspective needed to understand how useful it can be. The primary activity I perform in my job is accessibility audits. Even today, 14 years after WCAG 1.0 became a full W3C Recommendation, I still find easy-to-discover issues like missing alt attributes for images, missing form field labels, and missing header cells for tables – all things that are super easy to find accurately through automation. In fact, today I tested a page that had 1814 automatically detected errors with zero false positives!

The detractors of automated testing are very right about one thing: there’s only so much you can do with it. A large number of things are too subjective or too complex for machine testing. But this doesn’t mean automated accessibility testing has no value; it means we need to use it wisely. We need to apply the tool at the right time and for the right reasons, understand its limitations and capabilities, and exploit those capabilities as efficiently as possible. At that point, automation will be viewed as the vital part of the developer’s toolset that it should be.
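
As a closing illustration of how mechanical those easy-to-find issues are, here is a minimal sketch of a form-label check in the same spirit as the earlier alt check, assuming a DOM environment. The selector and heuristics are illustrative, not exhaustive:

function findUnlabeledInputs(doc: Document): HTMLElement[] {
  // Fields that need a programmatic label; hidden and button-like
  // inputs are excluded for simplicity.
  const fields = doc.querySelectorAll<HTMLElement>(
    'input:not([type=hidden]):not([type=button]):not([type=submit]), select, textarea'
  );
  return Array.from(fields).filter((field) => {
    const hasFor = field.id !== '' &&
      doc.querySelector(`label[for="${field.id}"]`) !== null;
    const wrapped = field.closest('label') !== null;
    const hasAria = field.hasAttribute('aria-label') ||
      field.hasAttribute('aria-labelledby');
    return !hasFor && !wrapped && !hasAria;
  });
}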
