Karl Groves

Tech Accessibility Consultant
  • Web
  • Mobile
  • Software
  • Hardware
  • Policy
Telephone
+1 443.875.7343
Email
karl@tenon.io
Twitter
@karlgroves

WTF-ARIA?!?

Recently, I saw someone Tweet that “…ARIA should be last” when working to make a website accessible. As you learn in Logic 101, broad generalizations are usually false. This one, though mostly correct in spirit, is wholly incorrect in certain situations. ARIA is clearly the right choice in cases where native semantics do not exist at all or are poorly supported. Still, there are some parts of ARIA that I think are just plain silly and ill-advised – namely, roles that are intended to behave exactly like elements in native HTML.

role=button

There was a time when creating pseudo-buttons, like a link styled to look like a button, made sense, because styling the <button> element was incredibly difficult. These days that’s not the case. As I understand it, any browser that supports the ‘button’ role will also reliably support CSS on the <button> element, making the use of this role pretty silly.
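For instance – and this is just a minimal sketch, with illustrative class names and colors of my own invention – the native element can simply be styled directly:

<button class="niftyBtn">Do Something</button>

<style>
    /* purely illustrative styling; use whatever design you like */
    .niftyBtn {
        background: #005a9c;
        color: #fff;
        border: 0;
        padding: 0.5em 1em;
        border-radius: 4px;
    }
</style>

No role, no scripting, and the keyboard behavior comes for free.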

role=heading

I’m completely unable to find a use case for the ‘heading’ role. The heading role, as the name implies, can function as a substitute for <h1>, <h2>, etc., and the WAI-ARIA spec says, “If headings are organized into a logical outline, the aria-level attribute can be used to indicate the nesting level.” In other words, you could do something like this:

<div role='heading' aria-level='3'>Foo</div>

I cannot imagine a scenario where this is a suitable alternative to HTML’s native heading elements. It is far more markup than necessary and, I suspect, more prone to errors by uninformed devs.
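Compare the native equivalent, which says the same thing in a fraction of the markup:

<h3>Foo</h3>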

role=link

This is another role that is ripe for developer error. Actual links – that is, an <a> element with an href attribute pointing to a valid URI – have specific methods and properties available to them, as I described in an earlier post titled Links are not buttons…. Adding a role of ‘link’ to something that is not a link now requires you to ensure that your code behaves the same way a link does. For instance, it should be in the tab order, should react to the appropriate events via keyboard, and should actually navigate to a new resource when acted upon. These are all things an actual, properly marked up link does already, making this role silly as well.
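To illustrate how much work the role alone doesn’t do – a sketch only, where the element, its id, and the target URL are all hypothetical – faking a link means at least this much:

<span role="link" tabindex="0" id="fakeLink">Read more</span>

<script>
    var fakeLink = document.getElementById('fakeLink');
    function navigate() {
        // a real <a href> gets navigation for free
        window.location.href = '/more.html';
    }
    fakeLink.addEventListener('click', navigate);
    fakeLink.addEventListener('keydown', function (event) {
        // real links also activate via the Enter key
        if (event.key === 'Enter') {
            navigate();
        }
    });
</script>

And even that sketch ignores things a real link also provides, like middle-click and “open in new tab”.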

role=list / role=listitem

Given the WAI-ARIA descriptions of the list role and listitem role, I can’t see anything that these roles offer that can’t be handled by plain HTML. The former is described as “A group of non-interactive list items” while the latter is “A single item in a list or directory.” In other words, these things are the same as a regular ole HTML list.
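Put side by side, the ARIA version merely restates what the native markup already says:

<div role="list">
    <div role="listitem">Foo</div>
    <div role="listitem">Bar</div>
</div>

<ul>
    <li>Foo</li>
    <li>Bar</li>
</ul>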

role=radio

The radio role is “A checkable input in a group of radio roles, only one of which can be checked at a time.” Of all of the roles listed here, this is the only one I could justify using. Unlike all of the other roles listed, the native element this replaces cannot be styled with much flexibility; it is infinitely easier to style something else and give it a role of ‘radio’. At the same time, I must admit to wondering: why? At the risk of sounding like I’m against “design”, it just doesn’t seem worth it to forego the reliability of a native control purely for visual styling. There are several JavaScript libraries, jQuery plugins, and whole front-end frameworks aimed at the styling of forms, and almost universally they fail to meet accessibility requirements in at least one of the following ways:

  • The design itself has poor contrast
  • The styling doesn’t work in Windows High Contrast Mode
  • The styling would be incompatible with user-defined styles
  • The custom elements are not keyboard accessible or, at least, visual state change doesn’t work via keyboard

In the case of custom radio buttons, merely adding a role of ‘radio’ is not enough and the costs of doing it right should be strongly considered against the reliability and robustness of just using native radio buttons.
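To give a sense of those costs, here is roughly what the markup alone looks like – a sketch, with a hypothetical group label – before any of the required scripting:

<div role="radiogroup" aria-label="Shipping method">
    <span role="radio" tabindex="0" aria-checked="true">Standard</span>
    <span role="radio" tabindex="-1" aria-checked="false">Express</span>
</div>

<!-- JavaScript must still toggle aria-checked, move tabindex between
     options, and handle the arrow keys to match native radio behavior -->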

Why ARIA?

Although the roles discussed above are, in my opinion, just plain silly in HTML, WAI-ARIA wasn’t created just for making HTML documents accessible. Ostensibly, it can be used for any web content and, in fact, the role attribute was added to SVG Tiny 1.2 all the way back in 2008. SVG would otherwise have no way of exposing the same name, state, role, and value information without ARIA, and it has been incorporated directly into SVG 2.
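For example – a minimal sketch, with a made-up graphic and label – ARIA can give an inline SVG an accessible name and role it would otherwise lack:

<svg role="img" aria-label="Tenon logo" viewBox="0 0 100 100">
    <circle cx="50" cy="50" r="40" fill="#005a9c" />
</svg>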
[Meme image: ARIA All The Things!]
So on the topic of “Use ARIA first” vs. “Use ARIA last”, neither is right. The right answer is to use ARIA whenever ARIA is the best tool for the task at hand. That might be for a progressive enhancement scenario when the user’s browser doesn’t support a specific feature, or to enhance accessibility under certain use cases, or to create an accessible widget that doesn’t exist in native semantics. Blanket statements don’t help, but constructive guidance does.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

The little button that could

The original issue

A link is used to perform the action of a button. The code below is used to show a region of the page that is hidden by default. Screen readers will announce this code as a link, setting the expectation that it will navigate. Instead, upon activation, focus remains on the link and it performs a button’s action.

<a id="x_123" class="niftyNuBtn" href="javascript:;">Do Something</a>

As a consequence, we recommend using an actual BUTTON element:

<button id="x_123" class="niftyNuBtn">Do Something</button>

The response

We can’t use a BUTTON because it would not look right with our CSS. Our stylesheets all reference this as a.niftyNuBtn. Why is this a problem anyway?

The follow-up

Well, there are two primary issues, the first of which is admittedly a little about semantic purity, in that a button is a button and a link is a link. But there’s a bit more to it: users who can see an affordance that looks like a button will intuit immediately that it will (or should) behave like a button. And, were it to look like a link, they would intuit that it is a link. A user who cannot see, or whose vision is very poor, may be using an assistive technology that reads out the properties of the underlying object. In short, a BUTTON will be announced via text-to-speech as "button". A button’s behavior and a link’s behavior are distinctly different – a button initiates an action in the current context, whereas a link changes the context by navigating to a new resource. In order to meet users’ expectations of how this affordance will perform, it should be a button.

The follow-up’s response

Our engineer said we can use WAI-ARIA for this. He said that we can give this thing a button role which will mean that JAWS will announce this as a button and that will alleviate your earlier concerns. So, how about this:

<a id="x_123" class="niftyNuBtn" role="button" href="javascript:;">Do Something</a>

Almost there, I think

Yes. This will cause ARIA-supporting assistive technologies to announce this link as a button. Unfortunately, there's still the issue of focus management, and this impacts more than just users who are blind. A link is understood to change focus to a new resource. Buttons may or may not change focus, depending on the action being performed. In this specific button's case, focus should stay on the button. At first glance, you may think this pseudo-button is doing what it needs to do, because you're keeping focus on the button when the user clicks it. That's true. What's also true is that focus stays on it when you hit the Enter key, which is also fine. Unfortunately, activating it with the spacebar causes the page to scroll. Users who interact with their computer using only the keyboard will expect that they can activate the button with the spacebar as well. Overall, the best option is to just use a button.

Digging in

Crap, you're right. Our engineer added the button role and everything was great, but then I hit the spacebar and the page scrolled! How do we stop this?!?

Prevent Default

Actually, stopping the scrolling is pretty easy. You can use event.preventDefault() like so:
$('.niftyNuBtn').on('click keypress', function(event){
    if(event.type === 'click'){
        customFunctionStuff();
    }
    else if(event.type === 'keypress'){
        var code = event.charCode || event.keyCode;
        if((code === 32) || (code === 13)){ // 32 = spacebar, 13 = enter
            customFunctionStuff();
            event.preventDefault(); // stop the spacebar from scrolling the page
        }
    }
});

Keep in mind, you'll need to do this event.preventDefault(); on every instance where you have code that acts like a button.

Acceptance

Turns out we've decided to use a button. All we needed to do was change a few CSS declarations. Thanks so much for the help.

Note: no, this isn't from a real client, but it is reminiscent of multiple situations.


Everything you know about accessibility testing is wrong (Part 4)

…how many bigger issues have we missed wasting our time fixing this kind of crap? @thebillygregory

Literally every single audit report I’ve ever done includes issues relating to the following:

  • Missing alt attributes for images
  • Missing explicit relationships between form fields and their labels
  • Tables without headers or without explicit relationships between header cells and data cells

I also frequently find these other issues:

  • Use of deprecated, presentational elements and attributes
  • Events bound to items that are not discoverable via keyboard
  • Poor color contrast
  • Blank link text
  • Missing, inaccurate, or incomplete name, state, role, and value information on custom UI widgets

The sheer volume of these types of errors is, to put it lightly, frustrating. In fact, the title of my presentation “What is this thing and what does it do” is actually born from an inside joke. During one audit where the system I was testing was particularly bad, I joked to some coworkers that analyzing the code was a bit like a game to figure out, “what is this thing and what does it do?”. I only later decided to put a positive spin on it.

As I mentioned in the previous post in this series, there is an average of 54 automatically detectable errors per page on the Internet. The thing about automated testing is that, even though it is somewhat limited in the scope of what it can find, some of the errors it does find are pretty high impact for the user. Think about it: missing alt text for images and missing labels for form fields are a huge problem for users. While the total number of accessibility best practices that are definitively testable by automated means is small, those that are tend to have a huge impact on whether people with disabilities can use the system.

Automatically detectable issues should never see the light of day

The reason why some people are against automated testing is that, for such a long time, we in the accessibility world haven’t really understood where the testing belongs. People have long treated automated accessibility testing as a QA process and, even worse, it often exists as the only accessibility-related QA testing that occurs. If your approach to accessibility testing begins and ends with the use of an automated tool, you’re doing it wrong. This automated-tool-or-nothing mentality seems at times to be cooperatively perpetuated both by tool vendors and by accessibility advocates who decry automated testing as ineffective. We must turn our back – immediately and permanently – on this either-or mentality. We must adopt a new understanding that automated testing has an ideal time and place where it is most effective.

Automated accessibility testing belongs in the hands of the developer. It must be part of normal development practices and must be regarded as part of the workflow of checking one’s own work. All developers do basic checking of their work along the way, be it basic HTML & CSS validation or checking that it displays correctly across browsers. Good developers take this a step further by using code inspection tools like JSLint, JSHint, PHP Mess Detector, PHP_CodeSniffer, and the like. In fact, IDEs like WebStorm, NetBeans, Aptana, and Eclipse have plugins that enable developers to do static code analysis. Excellent developers perform automated unit testing on their code and do not deploy code that doesn’t pass. What prevents accessibility from being part of this? Existing toolsets.

The revolution in workflow that will change accessibility

Last week I created a new WordPress theme for this site. I’m not the world’s best designer, but I hope it looks better than before. I created it from scratch using my Day One theme as a base. It also includes FontAwesome and Bootstrap. I used Grunt to manage a series of tasks while I built and modified the template’s design:

  • I use grunt-contrib-sass to compile 11 different SASS files to CSS
  • I use grunt-contrib-concat to combine my JS files into one JS file and my CSS files into one CSS file
  • I use grunt-contrib-uglify to minify the JS file and grunt-contrib-cssmin to minify the CSS file
  • I use grunt-uncss to eliminate unused CSS declarations from my CSS file.
  • I use grunt-contrib-clean to clear out certain folders during the above processes to ensure any cruft left behind is wiped and that the generated files are always the latest & greatest
  • I use grunt-contrib-jshint to validate the quality of my JS work – even on the Gruntfile itself.
  • I use grunt-contrib-watch to watch my SASS files and compile them as I go so I can view my changes live on my local development server.
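To give a flavor of how little configuration this takes, here is a stripped-down Gruntfile along those lines – a sketch only, with hypothetical file paths:

module.exports = function (grunt) {
    grunt.initConfig({
        sass: {
            dist: {
                // hypothetical paths: compile one SASS file to CSS
                files: { 'css/style.css': 'sass/style.scss' }
            }
        },
        watch: {
            sass: {
                files: ['sass/**/*.scss'],
                tasks: ['sass']
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-sass');
    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.registerTask('default', ['sass', 'watch']);
};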

All of my projects use Grunt, even the small Ajax Chat demo I’m giving at CSUN. Some of the projects do more interesting things. For instance, the Ajax Chat pulls down an external repo. Tenon automatically performs unit testing on its own code. When something goes wrong, Grunt stops and yells at you. You can even tie Grunt to pre-commit hooks. In such a workflow nothing goes live without all your Grunt tasks running successfully.

Imagine, an enterprise-wide tool that can be used in each phase, that works directly as part of your existing workflows and toolsets. Imagine tying such a tool to everything from the very lowest level tasks all the way through to the build and release cycles and publication of content. That’s why I created Tenon.

While Tenon has a web GUI, the web GUI is actually a client application of the real Tenon product. In fact, internally, Asa and I refer to and manage Tenon as a series of different things: Tenon Admin, Tenon UI, and Tenon (the API). The real deal – the guts, the muscle of the whole thing – is the Tenon API, which allows direct command-line access for testing your code. This is fundamental to what we believe makes a good developer tool. When used from the command line, Tenon plays happily with any *nix-based system. So a developer can open a command prompt and run:

$ tenon http://www.example.com

and get results straight away.

By using Tenon as a low-level command, it becomes possible to integrate your accessibility testing into virtually any build system, such as make, bash, ANT, Maven, etc. As I mentioned above, one possibility is to tie Tenon to a git pre-commit hook, which would prevent developers from committing code that could not pass Tenon’s tests. Like JSHint, you can customize the options to match your local development environment and the level of strictness to apply to such a pre-commit hook.
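As a rough sketch of what such a hook could look like – assuming the CLI exits non-zero when a page fails, and using a hypothetical local URL – a .git/hooks/pre-commit file might contain:

#!/bin/sh
# Hypothetical pre-commit hook: abort the commit if Tenon reports errors.
if ! tenon http://localhost:8080/; then
    echo "Accessibility errors found; commit aborted."
    exit 1
fi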

A typical workflow with Tenon might look a bit more relaxed for, say, a front-end developer working on a CMS and using Grunt to compile SASS to CSS and minify JS. Since Tenon is a Node.js module, we will be introducing a Grunt plugin. So once grunt-tenon is introduced into your Gruntfile.js, you can add grunt-contrib-watch to watch your work. Every time you save, Grunt will perform your normal tasks and test the page you’re working on for accessibility:


Processing: http://www.example.com
Options: {"timeout":3000,"settings":{},"cookies":[],"headers":{},"useColors":true,"info":true}
Injecting scripts:

>>  /var/folders/mm/zd8plqb15m38j4dzf3yf9pjw0000gn/T/1394733486320.647
>>  client/assets.js
>>  client/utils.js

-----------------
RESULTS SNIPPED FOR BLOG POST
-----------------
Errors: 10
Issues: 10
Warnings: 0
Total run time: 3.27 sec

The same Gruntfile can also be run on your Jenkins, Travis-CI, or Bamboo build server. Let’s say we’re using Jira for bug tracking and have it connected to our Bamboo build server. A developer on our team makes an accessibility mistake and commits that mistake with a Jira key — ISSUE-1234 — into our repo. As part of the Bamboo build, Tenon will return the test results in JUnit format. The Bamboo build will fail, and we can see in Jira that the commit against ISSUE-1234 caused the red build. It will link directly to the source code in which the error originated. Because we’re using a CI build system, from our developers’ standpoint all of this can happen many times a day without requiring anything more than a simple commit!

Proper management of accessibility necessitates getting ahead of accessibility problems as soon as possible, and there is effectively no earlier point than before the code is committed. As a pre-commit hook or, at least, as a Grunt task run before committing, accessibility problems are caught before they ship. Automated testing is not the end, but the beginning of a robust accessibility testing methodology.

The next post is the last in this series, where we’ll put it all together.


Looking forward to CSUN 2014

I’m currently wrapping up the rest of my work for the week and getting ready for the annual pilgrimage to San Diego for the International Technology and Persons with Disabilities Conference, otherwise known as “CSUN”. Unlike previous years, I have relatively few presentations. I’m glad about that, really, because it means I can spend more time meeting people. If this is your first year at CSUN, you should read John Foliot’s CSUN For Newbies.

Preconference Workshop

On Tuesday, March 18, 2014, at 1:30 PST Billy Gregory and I will be assisting Steve Faulkner and Hans Hillen in a Pre-Conference Workshop titled “Implementing ARIA and HTML5 into Modern Web Applications (Part Two)”.

My Presentations

  1. Thursday, March 20, 2014 – 3:10 PM PST
    Roadmap For Making WordPress Accessible – WordPress Accessibility Team members demonstrate progress and challenges and a roadmap for making WordPress accessible. Location: Balboa B, 2nd Floor, Seaport Tower
  2. Friday, March 21, 2014 – 1:50 PM PST
    No Beard Required. Mobile Testing With the Viking & the Lumberjack – Testing mobile accessibility can be as daunting as it is important. This session will demystify and simplify mobile testing using live demonstrations and straightforward techniques. Location: Balboa A, 2nd Floor, Seaport Tower

Demonstrations of Tenon

If you’re interested in finding out more about Tenon, email me or just stop me in the hall and I’ll give you a demo.

If you’re going to CSUN I want to meet you

I love CSUN’s family-reunion-like atmosphere and getting to catch up with the many people I already know. But what I like even more is meeting people I haven’t already met. If you’re new to accessibility or we just don’t know each other yet, please just walk up and say hello. This is how I met many of the people I count among my best friends in accessibility!

Something more formal?

If you want to set up something more formal, especially for a one-on-one conversation, I strongly recommend emailing me directly. Typically, something intended to be a simple, informal one-on-one get-together winds up being a big group outing, so if you want to set up a private time to talk, here are some ideas.

  • Morning – I’m available all week before 8am. I’m open Tuesday and Thursday before 9.
  • Afternoon – As the day gets later, openings get more scarce. I’m currently open for lunches all week.
  • Evening – Evenings are often filled with impromptu group activities, so I won’t schedule something during the evening.

So, given the above, email me at karl@karlgroves.com to set something up!


Everything you know about accessibility testing is wrong (Part 3)

In the previous post in this series, I ended by saying that “current automatic accessibility testing practices take place at the wrong place and wrong time and are done by the wrong people” – but really this applies to all accessibility testing. Of course every organization is different, but my experience substantiates the statement quite well. The “…by the wrong people” part is especially true. The wrong people are QA staff.

While QA practices vary, one nearly universal trait among QA staff is that they lack training in accessibility. Further, they often lack the technical skill necessary to skillfully decipher the reports generated by automated tools. When you combine their inexperience in both accessibility and development, you’re left with significant growing pains when you thrust an automated testing tool at them. As I’ve said in previous posts, these users will trust the automated tool’s results implicitly. Regardless of the quality of the tool, this increases the opportunity for mistakes, because there are always limitations to what can be found definitively, and some interpretation is very likely to be needed. There are also things that are too subjective or too complex for an automated tool to catch.

Irrespective of tool quality, truly getting the most out of an automated web accessibility tool requires three things:

  • Technical knowledge in that which is being tested
  • Knowledge and understanding of the tool itself
  • Knowledge around accessibility and how people with disabilities use the web

The first two points above apply to any tool of any kind. Merely owning a bunch of nice tools certainly hasn’t made me an expert woodworker. Instead, my significant investment in tools has allowed me to make the most of what little woodworking knowledge and skill I have. But if I had even more knowledge and skill, these tools would be of even more benefit. Even the fact that I have been a do-it-yourselfer since I was a child helping my dad around the house only helps marginally when it comes to a specialized domain like fine woodworking.

The similar lack of knowledge on the part of QA staff is the primary reason why they’re the wrong users for automated testing tools – at least until they gain sufficient domain knowledge in development and accessibility. Unfortunately, learning-by-doing is probably a bad strategy in this case, due to the disruptive nature of the erroneous issue reports that will be generated along the way.

So who should be doing the testing? That depends on the type of testing being performed. Ultimately, everyone involved in the final user-interface and content should be involved.

  • Designers who create mockups should test their work before giving it to developers to implement
  • Developers should test their work before it is submitted to version control
  • Content authors should test their work before publishing
  • QA staff should run acceptance tests using assistive technologies
  • UX Staff should do usability tests with people with disabilities.

At every step there is an opportunity to discover issues that had not previously been discovered, but there’s also a high likelihood that, as the code gets closer and closer to being experienced by a user, the issues found won’t be fixed. Among the test opportunities listed above, developers’ testing of their own work is the most critical piece. QA staff should never have functional acceptance tests that fail due to an automatically-detectable accessibility issue. Usability test participants should never have a failed task due to an automatically-detectable accessibility issue. It is entirely appropriate for developers to take on such testing of their own work.

Furthering the accessibility of the Web requires a revolution in how accessibility testing is done

Right now we’re experiencing a revolution in the workflow of the modern web developer. More developers are beginning to automate some or all of their development processes, whether that means dotfiles, SASS/LESS, or automated task runners like Grunt and Gulp. Automated task management isn’t the exception on the web, it is the rule, and it stems from the improvements in efficiency and quality I discussed in the first post in this series.

Of the top 24 trending projects on GitHub as of this writing:

  • 21 of them include automated unit testing
  • 18 of them use Grunt or Gulp for automated task management
  • 16 of them use jshint as part of their automated task management
  • 15 of them use Bower for package management
  • 15 of them use Travis (or at least provide Travis files)
  • 2 of them use Yeoman

The extent to which these automated toolsets are used varies pretty significantly. On smaller projects you tend to see file concatenation and minification, but the sky is the limit, as evidenced by this Gruntfile from Angular.js. The extensive amount of automated unit testing Angular does is pretty impressive as well.

Others and I often contend that part of the problem with accessibility on the web is that it is seen as a process distinctly separate from everything else in development. Each task that contributes to the final end product impacts people’s ability to use the system. Accessibility is usability for persons with disabilities. It is an aspect of the overall quality of the system, and a very large part of what directly impacts accessibility is purely technical in nature. The apparent promise made by automated accessibility testing tool vendors is that they can find these technical failings. Historically, however, they’ve harmed their own credibility by being prone to the false positives I discussed in the second post in this series. Finding technical problems is one thing. Flagging things that aren’t problems is another.

Automated accessibility testing can be done effectively, efficiently, and accurately and with high benefit to the organization. Doing so requires two things:

  • It must be performed by the right people at the right time – that is, by developers as part of their normal automated processes.
  • The tools must stop generating inaccurate results. Yes, this may mean reducing the overall number of things we test for.

It may seem somewhat non-intuitive to state that we should do less testing with automated tools. The thing is, the state of web accessibility in general is rather abysmal. As I get ready for the official release of Tenon, I’ve been testing the home pages of the most popular sites listed in Alexa. As of this writing, Tenon has tested 84,956 pages and logged 1,855,271 issues. Among the most interesting findings:

  • 27% of issues relate to the use of deprecated, presentational elements or attributes
  • 19% of issues are missing alt attributes for images
  • 10% of issues are data tables with no headers
  • 5% of issues relate to binding events to non-focusable elements.
  • 2% of issues relate to blank link text (likely through the use of CSS sprites for the link)

Nearly 85,000 tested pages is a statistically significant sample – more than large enough to draw these conclusions with high confidence.

There is an average of 54 definitively testable issues per page on the web. These are all development-related issues that could be caught by developers if they tested their work prior to deployment. Developers need a toolset that allows them to avoid these high-impact issues up front. This is the promise of Tenon.

In Part 4, I’ll talk about our need to move away from standalone, monolithic toolsets and toward integrating more closely with developers’ workflows.


Woodshop tour

I posted this to Facebook but wanted to share on my site, too. This is where I spend my weekends when it is cold outside:

Note: alt attribute on each image is blank. Visible text under the image describes the image.


Looking into the entrance-way. Drill press and lathe straight ahead. Chalkboard paint along the left wall. Dust collection system viewable overhead. Pipe clamps clamped to support beam.


View from right past the entrance-way. Glue-ups in progress on my work table. There’s no such thing as too many clamps. Ahead and to the left you can see the band saw. To the left of that is my crappy little router table. Dust collection system also in view.


View straight ahead past entrance way. Drill press right in front. Just past that is the lathe with a rosewood bowl blank mounted. Further ahead is my new grinder. Various supplies are on the shelves along the wall. The lower shelf is all finishing supplies such as wipe-on poly, glue, sandpaper, etc. while the upper shelf is mostly misc. On the floor ahead are various scraps of wood. Most scraps are thrown away but I occasionally save stuff that may be useful later, such as for experimenting with a joint before doing the final piece.


View of the side of the room that has the drill press and lathe. The lathe is a 42-inch Delta Rockwell. Right behind the lathe is a dust collection box. Unfortunately my dust collector doesn’t have enough horsepower to make the box useful. On top of the dust collection box is a DIY air filter powered by a high-powered computer fan. To the left of that is another box that holds drills and various drill-related stuff like a Kreg jig, drill bits and forstner bits. Not shown: On this side, the workbench is actually a cabinet. Inside the cabinet is 6 glass carboys fermenting beer.

Corner of the room by the lathe. Parts bin on the wall. Shelves with finishing supplies, sharpening supplies, and sanding supplies as well as two grinders on the bench. Dust collection hoses along the top.

Dust collection hoses are also prominent in this picture as is the band saw.

Back wall showing 12-inch compound miter saw. Behind that is pegboard wall holding various tools. Hammers, chisels, screw drivers, files, pliers, and more. A shelf holds various router-related items.


Right corner of the back wall, from a little greater distance. Shows router table, router bits and various router related items. This is also where hand saws are stored as well as safety related stuff like safety glasses, face shield, and air masks. Underneath the router table is a wet-dry vac. It doesn’t get much use now that I have the dust collector, but this is such a good place to store it. On the side of the work bench is a pencil sharpener.

The “back room” of the shop holding more than 200 board-feet of walnut, about 100 board feet of cedar, and 5 walnut slabs. Some other misc. pieces are shown such as a right-angle jig, a spline jig, table saw miter jig, and box-joint jig. Barely in the foreground is a jointer.


View from the very back looking in toward the entrance. Upper left shows a filtered box fan. On the lower left is a new table saw, and beyond that is a downdraft sanding table. Like the dust collection box, the downdraft sanding table isn’t as useful as it could be because the dust collector doesn’t really have enough oomph. On the right side foreground is the jointer. Further ahead is the bandsaw on the right and beyond that is the worktable. On a shelf under the worktable is my 13-inch Dewalt planer, Bosch circular saw, and Porter Cable Dovetail jig. Eventually I’ll have to make a stand for the planer. It isn’t a big deal to pick it up and down right now but when I’m older its gonna be difficult, for sure, because the damn thing is heavy.


Special closeup view of my table saw. While I got a lot of use out of my Dewalt portable table saw, this thing is a thousand times more useful. Behind the table saw are a ton of empty jars ready for Jennifer Groves to put food in this summer!


Not really shown elsewhere in this photo album is a “closet”. This is the other side of the wall on the right of the entranceway. Inside of this room is a dust collector, shown prominently in this picture. To the lower left in this picture is a dust separator which basically separates the big chips before they make their way into the dust bag. Under the dust collector but not shown is a small air compressor.

“Should we detect screen readers?” is the wrong question

The recent release of WebAIM’s 5th Screen Reader User Survey has reheated a long-simmering debate over whether or not it should be possible to detect screen readers. Currently there is no reliable means of determining whether a user with a disability is visiting your site and, specific to screen readers, this is because that information isn’t part of the standard information used in tracking users, such as user-agent strings. Modern development best practice has shied away from clunky user-agent detection and toward feature detection. The thought, then, is that it should likewise be possible to detect whether or not a visitor is using a screen reader. This has drawn sharply negative reactions from the accessibility community, including from people I’d have thought would be in favor of the approach. In all cases, people seem to be ignoring a more obvious shortcoming of this idea: accessibility isn’t just about blind people. Accessibility is about all people.

Getting data at a sufficient level of granularity is a bit difficult, but the conventional wisdom around disability statistics is:

  • There are more people who are low-vision than who are blind
  • There are more people who are hard of hearing than who are visually impaired
  • There are more people who are motor impaired than who are hard of hearing
  • There are more people who are cognitively impaired than all of the above

In fact, the exact ordering can vary by age group. Census Bureau data does validate the claim that, across all age groups, the percentage of people who are visually impaired is consistently the smallest of all disability types. In other words, if your approach to accessibility has anything to do with detecting screen readers, you’ve clearly misunderstood accessibility.

But let’s skip that for a moment. Let’s assume you could detect a screen reader as easily as including Modernizr on your site. Now what? What do you do differently? Well, no matter what you do, your approach “solves” accessibility issues for less than 2% of the working-age population. Put another way, whatever money or time you’ve spent on detecting and adapting to screen reader users, you’ve only gotten yourself 1/5 of the way toward being “accessible”. Instead of asking whether it should be possible to detect screen readers, the question should be “how do we make our site more usable for all users?”.


Everything you know about accessibility testing is wrong (part 2)

In Everything you know about accessibility testing is wrong (part 1) I left off talking about automated accessibility testing tools. It is my feeling that a tool of any kind absolutely must deliver on its promise to make the user more effective at the task they need the tool to perform. As a woodworker, I have every right to expect that my miter saw will deliver straight cuts. Further, if I set the miter saw’s angle to 45 degrees, I have the right to expect that the cut I make with the saw will be at an exact 45 degrees. If the saw does not perform the tasks it was designed to do and does not do so accurately or reliably, then the tool’s value is lost and my work suffers as a consequence. This is the case for all tools and technologies we use, and this has been the biggest failing of automated testing tools of any kind, not just those related to accessibility. Security scanning tools tend to generate false results at times, too.

I’m not sure if this is their exact motivation, but it often seems as though accessibility testing tool vendors measure their tool’s value by the total number of issues it can report on, regardless of whether those issues are accurate. In fact, nearly all tools on the market will tell you about things that may not actually be issues at all. In this 2002 evisceration of Bobby, Joe Clark says, “And here we witness the nonexpert stepping on a rake.” He goes on to highlight examples of wholly irrelevant issues Bobby had reported. From this type of experience came the term “false positives”, representing reported issues that are inaccurate or irrelevant, and it remains a favorite whipping post for accessibility testing tools.

It would be easy to dismiss false positives as the result of a young industry, because nearly all tools of the time suffered from the same shortcoming. Unfortunately, the practice persists today. For example, in the OpenAjax Alliance Rulesets, merely having an object, embed, applet, video, or audio element on a page will generate nearly a dozen error reports telling you things like “Provide text alternatives to live audio” or “Live audio of speech requires realtime captioning of the speakers.” This practice is ridiculous. The tool has no way of knowing whether the media has audio at all, let alone whether that audio is live or prerecorded. Instead of reporting on actual issues found, the tool’s developer would rather saddle the end user with a dozen possibly irrelevant issues to sort out on their own. This type of overly-ambitious reporting does more harm than good, both at the individual website level and for the accessibility of the web as a whole.

No automated testing tool should ever report an issue that it cannot provide evidence for. Baseless reports like those I mentioned from the OpenAjax Alliance are no better than someone randomly pointing at the screen, saying “Here are a dozen issues you need to check out!”, and then walking out of the room. An issue report is a statement of fact. Like a manually entered issue report, a tool should be expected to answer very specifically what the issue is, where it is, why it is an issue, and who is affected by it. It should be able to tell you what was expected and what was found instead. Finally, if a tool can detect a problem, then it should also be able to make an informed recommendation about what must be done to pass a retest.

False positives (or false negatives, or whatever we call these inaccurate reports) do everything but that. By reporting issues that don’t exist, they confuse developers and QA staff, cause unnecessary work, and harm the organization’s overall accessibility efforts. I’ve observed several incidents where inaccurate test results caused rifts in the relationship between QA testers and developers. In these cases, the QA testers trust the tool’s results implicitly. After all, why shouldn’t they expect the tool’s results to be accurate? As a consequence, QA testers log issues into internal issue-tracking systems based on the results of their automated accessibility-testing tool. Developers then must sift through each one, determine where the issue exists in the code, attempt to decipher the issue report, and figure out what needs to be fixed. In cases where the issue report is bogus, either through inaccuracy or irrelevance, it generates – at the very least – unnecessary work for all involved. Worse, I’ve seen numerous cases where bugs get opened, closed as invalid by the developer, and then reopened by the QA tester after a retest, because the tool has again told them it is still an issue.

Every minute developers and QA testers spend arguing over whether an issue is real is a minute that could be spent remediating issues that are valid. Consequently, it is best either to avoid tools prone to such false reports or to invest the time required to configure the tool in a way that squelches whatever tests are generating them. By doing so, the system(s) under development are likely to get more accessible, and developers are less likely to brush off accessibility.

In fact, I envision a gamification-type effect from this approach of only reporting and fixing real issues. A large number of these “definitively testable” accessibility best practices can be quick to fix, with minimal impact on the user interface. Over time, developers will instinctively avoid those errors as accessible markup becomes part of their coding style, and automated accessibility testing can remain part of standard development and QA practices, finding anomalous mistakes rather than instances of bad practice. This possibility can never exist while developers are trying to decipher which issues are or are not real problems, because they’re instead left feeling like they’re chasing their tails.

Current automatic accessibility testing practices take place at the wrong place and wrong time and are done by the wrong people

Automated testing tools can only test that which they can access. Historically this has meant that the content to be tested has to exist at a URL the tool can reach; the tool performs a GET request for the URL, receives the response and, if the response is successful, tests the document at that URL. Implicitly this means that work on the tested document has progressed to the point where it is, in all likelihood, close to being (or is, in fact) finished. That is, unless the tested URL is a “mockup” and the tool resides in, or has access to, the same environment as the development environment. Historically, the tested documents have usually already been deployed.

This is the worst possible place and time for accessibility testing to happen, because at that point in the development cycle a vast array of architectural, design, workflow, and production decisions have been made that directly impact the team’s ability to fix many of the issues that will be found. This is especially true when selecting things like front-end JavaScript frameworks or MVC frameworks, when selecting colors, or when creating templates and other assets to be presented via a Content Management System. In each case, early pre-deployment testing could help determine whether additional work is needed or whether different products should be selected. Post-deployment remediation is always more costly, more time consuming, and less likely to succeed. In all cases, accessibility testing performed late in the development lifecycle has a very real and very negative impact, including lost sales, lost productivity, and increased risk to project success. Late testing also increases the organization’s risk of litigation.

The best way to avoid this is, as the popular refrain goes, to “test early and test often”. Usability and accessibility consultants worldwide frequently lament that their clients don’t do so. This website, for instance, happens to perform very well in search engines for the term “VPAT”, and about once a week I get a call from a vendor attempting to sell to a US government agency that has asked for a VPAT. The vendor needs the VPAT “yesterday” and, unfortunately, at that point any VPAT they get from me is going to contain some very bad news that could have been avoided had they gotten serious about accessibility much earlier in the product lifecycle. In fact, as early as possible: when the first commit is submitted to version control, and when the first pieces of content are submitted to the content management system. Testing must happen before deployment and before content is published.

Stay tuned for part 3 where I talk about critical capabilities for the next generation of testing tools.


Everything you know about accessibility testing is wrong (part 1)

My first experience with accessibility and, therefore, accessibility testing, came from Bobby.

In 1995, CAST launched Bobby as a free public service to make the burgeoning World Wide Web more accessible to individuals with disabilities. Over the next decade, Bobby helped novice and professional Web designers analyze and make improvements to millions of Web pages. This work won CAST numerous awards and international recognition.

CAST no longer supports the Bobby accessibility testing software. Bobby was sold to Watchfire in 2004 which, in turn, was acquired by IBM in 2007.

Although Bobby is no longer available as a free service or standalone product, it is one of the tests included within the IBM Rational Policy Tester Accessibility Edition software, the comprehensive enterprise application for testing websites. (http://www.cast.org/learningtools/Bobby/index.html)

Bobby was so popular that the above URL remains the #4 result in Google for the word “Bobby”. My first experience with Bobby came in the form of a rejection email for a job application. In the early 2000s I was attempting to get a job as a web developer in the Washington DC area. At this time, Section 508 was new-ish and government contractors such as Northrop Grumman, Raytheon, Lockheed Martin, and the like were very focused on hiring web developers whose work was “508-compliant”. On one occasion, I got a response to my job application asking me to send over some work samples. I responded with a series of publicly available URLs showing off my work and in a few hours received an email saying that they would be unable to hire me because my work had failed a test by Bobby. Bobby, whoever or whatever it was, became the thing interfering with my ability to put food on the table. Unacceptable.

For my part, I did as I always do. I became obsessed with accessibility. Today when people ask me how I got interested in accessibility, I tell them the above story and tell them I have no “legitimate” reason for my interest. In other words, I don’t have a disability myself, nor do any of my family members or friends. I don’t have any interesting back-story like Ted Henter, creator of JAWS. Instead, I’ve come to view accessibility as a quality-of-work issue. As a developer, the quality of my work has the direct ability to impact users’ ability to consume and interact with the content I create. To me, persons with disabilities are no different than anyone else using my site. All of the human rights stuff surrounding accessibility is purely ancillary. I’ve done a bad job if users have a hard time. My interest in accessibility is as simple as that.

Perhaps this is due to my being so new to accessibility at that time, but I view that time period fondly as one in which there were incredible opportunities to learn. Among the educational resources I discovered, the best, by far, was the WebAIM.org Discussion List. The resources provided on the WebAIM website itself were immensely useful, but the active and friendly atmosphere of their discussion list was and remains the best community for those new to web accessibility. The list of active participants on that list is like a who’s who of accessibility. It didn’t take long before I noted that many of the more notable contributors to the community had a high level of disdain for automated testing tools. This disdain wasn’t altogether unfounded, as documented by Jukka Korpela in 2002. In the long term, however, this disdain has created roadblocks to the adoption and use of tools for accessibility testing and, in my opinion, has delayed the development of newer and better tools for this task. The end result has been the development of tools and procedures that test the wrong thing at the wrong time, and an atmosphere of generalized resistance to tools.

Resistance to tools in general is somewhat justified

In the accessibility community, resistance to automated accessibility testing tools comes in two flavors: those who say tools cannot provide complete coverage for all accessibility issues, and those who say such tools take the focus off of the user and put it on the system. Both of these objections are born from perspectives that don’t fully understand the purpose of automated testing. Further, they fail to consider that although both claims are true, neither actually negates the value of automated testing.

Evolutionary biologists and anthropologists cite two major reasons for mankind’s evolution into the dominant species on Earth: the use of tools and the taming of fire. Beginning with rudimentary stone tools, man’s first tools enabled easier access to food. We could hunt for, butcher, and prepare food more easily through the use of tools. The evolution of tools and technology isn’t unlike the biological evolution of a species. Our opposable thumbs, control of fire, and larger brains form the basic trunk, with broad categories of tools and technologies forming the limbs of the tree and more specific tools and technologies forming the twigs. As in biological evolution, certain types of tools and technologies die off along the way, losing favor and being replaced with better ones.

Forge welding came about in the Bronze and Iron Ages and remained the dominant form of welding for thousands of years. In the Middle Ages, forge welding techniques saw many improvements, which remained in use for hundreds of years. In the early 1800s, however, the discovery of the electrical arc revolutionized welding and brought about advances that would later evolve into SMAW (shielded metal arc welding), commonly referred to as “stick welding”. Since the late 1800s and continuing today, newer, better, and safer methods of welding are continually being developed. Today this includes such high-tech methods as laser beam welding and electron beam welding.

In all cases the tools and technologies we employ are aimed at one primary goal: accomplish a task more easily, efficiently, and with higher quality. In the earliest stages of tool development the tools were aimed at doing things we were already doing, but doing them better, such as using rocks to smash nuts or sharp flints to butcher meat. But in the Bronze and Iron Ages our goals were more ambitious. We aimed at doing things we could never accomplish without the tool. As tools and technologies continue to evolve, mankind’s goals remain the same: make things easier, faster, better and make the previously impossible become possible.

At a fraction of the size and a fraction of the cost, the smartphone in your pocket holds more than 13,000 times as much data as the first hard disk, the IBM 350 RAMAC, created in 1956, which weighed over a ton and cost $10,000 (in 1956 money). Literally everything around us in the modern world is the result of technological evolution including, in all likelihood, the grass on your lawn. This fact is, quite frankly, why I’m so baffled by resistance to automatic accessibility testing. In any case where a capability exists that can replace or reduce human effort, it only makes sense to use it. In any case where we can avoid repetitious effort, we should seek to do so. Of course this isn’t always possible. Even in the previous example of welding, some jobs can’t sustain the expense of robotic welding. Perhaps the job is unique or the production run is too small to justify robotic welding. But that doesn’t prevent the creation of a Welding Procedure Specification and the use of a specific process to create the end product. Automatic accessibility testing is no different.

While there are a number of seemingly insurmountable challenges relating to accuracy or completeness of automated accessibility testing, that doesn’t mean such testing has no value. Automated testing can and should be utilized in a way that makes the tool’s user more efficient and effective at their job – namely, finding accessibility issues. For example, it is trivial for an automated tool to be programmed to find instances of images without alt attributes and therefore no human should ever have to spend time looking for those issues. However, machines are wholly incapable of the subjective interpretation of the meaning behind an image in context and therefore judging the quality of the text in an alt attribute is a task that does require a skilled human. This will probably always be the case, at least as long as the Web as we know it exists.
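By way of illustration, the missing-alt check is nearly a one-liner – a minimal sketch you could run in any browser console:

// A machine can find these reliably; judging alt *quality* still takes a human.
var missingAlt = document.querySelectorAll('img:not([alt])');
console.log(missingAlt.length + ' image(s) are missing an alt attribute');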

In other words, an automated tool can tell you when things are bad but cannot tell you when things are good. Additionally, the array of possible “bad” things a tool can reliably and accurately find is somewhat small. My own and others’ estimates suggest that only 25% of accessibility best practices can be tested definitively using automated testing. It is this lack of complete and reliable coverage that detractors of automated testing point to as evidence against the usefulness of automated testing on the whole. The somewhat epidemic issue of existing automated tools returning overly-conservative, irrelevant, or erroneous results only serves to strengthen these claims against automated testing. But this only makes the case against the specific tools, not against automated testing. The fact that automated accessibility testing tools have historically been prone to delivering bad results doesn’t mean good results are impossible.

Stay tuned for post #2 in this series where I discuss the challenges and proper use of tools for testing web accessibility.


DIY Post: Replacing Portion of Wood Floors

Jennifer Groves and I purchased our house about 7 years ago. Most of the rooms had carpet, though two bedrooms had exposed wood floors. Given the age of the house, we figured that if some of the rooms had wood floors, it was likely that all of them did. I peeled up the carpet in some corners and, sure enough: hardwood. So I removed all the carpet in the house. Unfortunately, our hallway floors were too far gone to sand and refinish. We had an aging dog who was sometimes unable to hold her water, and her accidents usually happened in the hallway. We would clean up with our Hoover SteamVac, but the SteamVac can’t get up all of the water, so over time the wood underneath became stained. The only options at that point were to either put down carpet again or replace the wood. My choice was to replace the wood.

Replacing the wood of course requires that you first remove the wood you wish to replace. This is far more difficult than it seems when removing just a portion of the floor. If you’re removing all of the wood in the house, or at least on that level, then you can just rip out the whole floor without any regard for whether you damage the other floorboards. When replacing a portion of the floor, you need to take care not to damage the boards adjacent to the portion you’re replacing.

hallway with wood floors removed and bare sub floor exposed.

The following two images show why it is important to take care during removal. Proper installation of a wood floor involves staggering the boards. Proper replacement requires maintaining that staggering and therefore requires you to remove the boards entirely. This was pretty easy to do in the hallway, where the boards ran wall-to-wall.

View of doorway area of floor showing staggered boards

Unfortunately, removing the entire board is impossible in the doorways. Back when our house was built in 1956, the boards used were 13 feet long. Since our hallway is only 6 feet wide, many of those boards traverse from one bedroom to the next, passing through the hallway along the way. Removing the boards that continued into the bedrooms often meant cutting them in place.

Another view of doorway. These boards are cut deeper into the room.

Finding a proper tool to remove these boards was surprisingly hard for me, as I’d never done this type of work before. I experimented with a number of different ideas until I heard of the Fein Multimaster. Holy cow. If you have an oscillating tool and it isn’t the Fein Multimaster, you are wrong. Prior to buying the Fein, I had tried a battery-powered Craftsman oscillating tool and it was horribly inadequate for the task of cutting even one of the boards. It didn’t have the muscle or battery life necessary for the job. Ultimately I had to make about 18 cuts, so the Craftsman tool clearly wasn’t going to cut it (pun intended). I returned the Craftsman tool, ordered the Fein from Amazon, and once it was delivered I finished the cuts in about 45 minutes.

Closer doorway view showing the cuts.

The previous owners had, sometime in the past, experienced a pretty bad termite infestation, and the board at the top of the steps required replacement. The existing board had been a 2-by-6, so I replaced it and beefed up portions of the underlying floor joists at the same time.

Showing the 2 by 6 board replaced.

The next step in the process is to lay down red rosin paper as an underlayment before laying down the floor.

Hallway subfloor covered by red rosin paper

The only problem with putting down the rosin paper is that now you don’t know where the floor joists are. The plywood subfloor has nails in it which show where the floor joists are, but with the rosin paper down you can’t see the nails. The solution? My magnetic stud finder.

Close-up of magnetic stud finder on top of red rosin paper

A magnetic stud finder works by being attracted to the nails in a stud. Since the subfloor was already nailed down, all I had to do was apply the same principle: find the nails with the stud finder! I made a mark wherever I found a nail.

Placing marks on the red rosin paper where the stud finder found a nail.

I followed that up by striking a line across those marks. When laying the floorboards, the boards get nailed to the joists wherever they meet these lines.

View of hallway showing lines on the red rosin paper where the floor joists are

The next step is laying boards. If you’re starting completely fresh, you need to make sure your first board is straight and square with the room. Note that this isn’t the same as square with the wall(s), because the walls may not actually be square. Anyway, I decided to cheat by keeping the original “first” board, since it wasn’t stained and could be finished. I then started from that board and began nailing down floor.

First three boards nailed down.

You can either rent a flooring nailer or buy one. Since I knew I was going to replace more flooring later, I went ahead and bought one. If you rent a nailer, the rental is for 24 hours at a time; after a couple of days of rentals you may as well have bought the darn thing. So I did!

About half the hallway done.

Here’s the fully installed hallway floor!

All of the floor laid down.

Some commenters on Reddit disagreed with the way I dealt with the staggering of the boards, suggesting that the “wood will never match”. Here’s a view of the finished wood in one of the doorways. While the wood isn’t a 100% match, you really have to be looking for it in order to notice.
Finished view of doorway showing the match-up

What about price? All in all, I wound up paying for the wood, the red rosin paper, the nails, the Fein tool, and the flooring nailer. I already owned the air compressor and the other tools I needed. Despite needing to buy the tools, I saved about $200 compared to the quotes I had gotten from professional installers.

Stay tuned for two more posts on floor installation and refinishing.