Karl Groves

Tech Accessibility Consultant
  • Web
  • Mobile
  • Software
  • Hardware
  • Policy
Telephone
+1 443.875.7343
Email
karl@tenon.io
Twitter
@karlgroves

One. Simple. Question. (and a follow-up)

Several weeks ago, Bryan Garaventa made a post to the WAI-IG mailing list. The email thread went somewhat sideways because some list members didn’t “get it”, but it died down quickly enough. AccessIQ reignited the issue, wondering “…do web accessibility professionals have a sense of humour?” My response? Clearly the answer is NO. Even when a blind guy (Bryan) tries to make a point through humor, people in the accessibility community go on a ragefest about people “making light of accessibility”.

Instead of productive, collaborative discussion about bringing accessibility into the mainstream, accessibility people are too busy fighting with each other and using social media as a sounding board to name and shame everyone whose products aren’t perfectly accessible. I’ve said it before: we need to put the pitchforks down. We need to understand that “perfection” isn’t possible and work on making “better” happen instead. To that end, I propose we begin focusing on two very simple questions:

Do you agree that it is acceptable to prevent certain classes of users from using your ICT product or service?

This requires only a one-word answer: “Yes” or “No”. I’ve asked people this question before and I often get answers other than Yes or No. People will say “But that depends on [any number of red herring conditions]” and I always try to redirect to the original question. To move the conversation forward, we need to know whether the other person thinks it’s OK to discriminate. Hint: Nobody thinks that is OK. Or, at least, they won’t admit it in public.

Follow-up: What can you do now to ensure that access for all people is improved?

From there, we can assume that the other party is prepared to move forward with accessibility. We don’t need to continue rambling on about the various reasons why accessibility is good. We’ve gone past that and now it’s time to act. But it isn’t reasonable to expect perfection immediately. It also isn’t reasonable to expect that the necessary resources and knowledge will just magically appear out of nowhere. So the follow-up is: given your current knowledge and resources, what action can be taken immediately that will deliver a demonstrable positive result for users? Incremental betterment is far better than impatient expectations of perfection. As we make improvements to what we do and how we do it, we can make things better.

While I’ve previously spent a lot of time writing about selling accessibility, I really think the most effective approach is to limit the “selling” to one question. We don’t need to sit there and spin our wheels with red-herring distractions like ROI. Is it right to discriminate or not? No? Awesome. So what are we going to do now to make sure we don’t discriminate? Do that.

Stop selling. Start leading.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

Longdesc – Where are the alternatives?

Non-text Content is “any content that is not a sequence of characters that can be programmatically determined or where the sequence is not expressing something in human language”. Mostly what comes to mind when discussing non-text content is audio/video content, images, or other graphical content that isn’t necessarily image-based. WCAG 1.1.1 calls for alternatives for non-text content. For basic images, presented in the <img> element, the ‘alt’ attribute is the most frequent means of providing an alternative. The content you place in the ‘alt’ attribute may vary depending on the image and context, but generally “…the general consensus is that if the text alternative is longer than 75-100 characters (1 to 2 sentences), it should not be considered a short text alternative and should not be presented using the alt attribute or the figcaption element…”. In the vast majority of cases, that amount of text should cover you rather well in providing a good, clear, and concise alternative for the image. But what if the image is complex? What if the information portrayed in the image can’t be described effectively in 75-100 characters? One suggestion is to use the longdesc attribute.

Historically, support for longdesc has been rather spotty. Back in 2000, WCAG 1.0 recommended using longdesc, but also acknowledged the lack of browser support and also recommended the use of what was called a D link. In practice, the D link probably saw more popularity than longdesc and its recommendation was pretty pervasive.

Over time longdesc support among user agents has improved, having been added to Opera, IE, and Firefox. Chrome’s dev team has made movement toward supporting longdesc. Screen readers such as JAWS, Window-Eyes, NVDA, and Orca support it, as do many authoring tools. That hasn’t stopped the pushback on longdesc, and Apple has stated they have no plans to implement it.

longdesc (as implemented) is a poor solution to a real problem

There should be no argument in anyone’s mind that there’s a real issue that needs to be addressed: effective and informative alternatives to complex and/or detailed non-text content. There are loads and loads of images on the web which convey things like charts, graphs, and diagrams. How do you describe, in 75-100 characters, the components and operation of a 4-stroke engine?

Example complex image: cutaway of 4-stroke engine

Easy. You can’t. There may be other ways to describe this, such as in the same page as the image. Try getting that one past the content people. But longdesc – in its current form – is a crappy way to do this. See, the problem with longdesc is that it is basically only useful for screen reader users. Longdesc essentially locks out sighted users entirely. The image with longdesc isn’t placed in the tab order, and there’s no visual affordance provided to indicate the existence of longdesc. Firefox’s implementation provides access to the long description via the context menu, which is great if you know the image has a long description – something you likely won’t know if you’re not a screen reader user. As it stands, longdesc is wholly useless to people with cognitive disorders, which is another population that could seriously benefit from long descriptions.

Where are the alternatives?

Ultimately, I have to agree with many of the criticisms of longdesc. But that doesn’t mean I agree with the notion of just doing nothing, either. The fact remains that some images require longer descriptions than the 75-100 characters available to the alt attribute and despite the protestations of longdesc’s detractors, there don’t appear to be any proposed alternatives for implementing a mechanism of supplying long descriptions for non-text content, beyond saying “Fuck it, leave it to the web authors to figure that out”.

Two ideas

Unfortunately, that’s where we are right now if we want a viable means of supplying long descriptions. With Safari/VO support out of the picture, we can’t rely on partially supported features. Or can we?

See, here’s the thing about HTML: you can actually put whatever you want in your markup. You can make up your own elements or attributes; you can even add bogus values in attributes. That doesn’t mean it’ll do anything, but you can put it there. For instance, you can add the old <blink> tag to your page, but it won’t actually blink anymore in any major browser. Similarly, you can still add longdesc to your images. The attribute will still be in the attributes node of the image object. Because it is in the DOM, you, the developer, can do something useful with it. Here are two possible ideas:

Dirk Ginader’s longdesc plugin places an unobtrusive icon over the image which represents a link to the long description. Activating the link replaces the image with the long description. Dirk hasn’t done much continued development on the plugin, but it’s a great starting point and I like the concept.

Today, I created a new Polymer-based Web Component called image-longdesc. It is basically just a different approach to my image-caption component. It places a link to the long description under the image. Remember my 4-stroke engine example? Here it is with a caption and longdesc link:

Screenshot: Prior engine example as a web component with caption and longdesc link under the image
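Both ideas boil down to the same move: since longdesc survives in the DOM even without browser UI, a script can surface it as a visible link for everyone. Here’s a minimal vanilla-JS sketch of that move – this is not code from either project, and the helper name and markup are my own assumptions:

```javascript
// Build a visible link to an image's long description.
// buildLongdescLink is a hypothetical helper, not part of any library.
function buildLongdescLink(longdescUrl, altText) {
  var label = 'Long description' + (altText ? ' of ' + altText : '');
  return '<a href="' + longdescUrl + '">' + label + '</a>';
}

// In a browser, you might then append a link after each image that has one:
// Array.prototype.forEach.call(
//   document.querySelectorAll('img[longdesc]'),
//   function (img) {
//     img.insertAdjacentHTML('afterend',
//       buildLongdescLink(img.getAttribute('longdesc'),
//                         img.getAttribute('alt')));
//   }
// );

console.log(buildLongdescLink('engine-desc.html', 'cutaway of a 4-stroke engine'));
```

The point isn’t this particular markup; it’s that because the attribute is programmatically available, authors can provide the visual affordance that browsers don’t.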

Are these ideas perfect? I don’t know. What I do know is that we’ve yet to see longdesc’s detractors come up with any viable alternatives that address the very real need for suitable alternatives to complex non-text content.


Feature misuse !== feature uselessness

Ugh. Longdesc. For those who don’t follow such things, the fight over the longdesc attribute in HTML5 goes back to (at least) 2008. Back then, the WHATWG was also considering eliminating the alt attribute, the summary attribute, and table headers. Ian Hickson’s blatant and laughable egotism led him to believe he knew more about accessibility than the many actual accessibility experts he was arguing with. In this context, it is no wonder that a lot of people have gotten to the point of just being sick of the topic of longdesc, instead preferring to concentrate on more impactful concerns in accessibility.

While I agree with a lot of the arguments made in Edward O’Connor’s Formal Objection to advancing the HTML Image Description document along the REC track, I do feel strongly compelled to address the use of a tired argument that I can summarize as “Because web developers misunderstand or misuse a feature, that means the feature must be bad”. In fact, I first responded to this type of argument 6 years ago on the HTML5 mailing list, where I stated:

The notion that the decision to keep or eliminate an attribute based on whether it gets misused by authors is amazingly illogical. I would challenge the author to eliminate every element and attribute which is “widely misused” by authors.

For nearly a dozen years now, I’ve been employed in a capacity which gives me a day-to-day glimpse of how professional web developers are using markup. I see HTML abuse on a daily basis. Bad HTML permeates the web due to ignorant developers and is exacerbated by shitty UI frameworks and terrible “tutorials” by popular bloggers. In my years as an accessibility consultant I’ve reviewed work on Fortune 100 websites and many of the Alexa top 1000. I’ve reviewed web-based applications of the largest software companies in the world. The abuse of markup is ubiquitous.

  • I’m working with a client right now who has over 1600 issues logged in their issue tracking system just related to accessibility. Several dozen of those issues relate to missing ‘name’ attributes on radio buttons.
  • Across 800,000 tested URLs, Tenon.io has logged an average of 42 accessibility issues per page. This number is statistically significant.
  • The average audit report by The Paciello Group is 74 pages long. I recently finished a report that was over 37,000 words long.

Regardless of your position on longdesc, citing developer misuse is little more than a red herring.


Video: Prioritizing Remediation of Accessibility Issues (from ID24)

The Paciello Group has recently uploaded all of the sessions from the Inclusive Design 24 event that was held on Global Accessibility Awareness Day. My session was titled “Prioritizing Remediation of Accessibility Issues” as described:

Once you have a report from an accessibility consultant, automated tool, or your QA team, now what? Not all issues are created equal. This session will discuss the various factors which must be weighed in order to make the most effective use of developer time and effort while also having the best possible results for your users.

Watch my video below, including repeated cameos by my mastiff, Poppy, and take a look at the whole playlist.


Announcing the Viking & the Lumberjack

At CSUN 2014, Billy Gregory and I gave a presentation titled No Beard Required: Mobile Testing With the Viking & the Lumberjack. The presentation was an absolute disaster. Our approach to the presentation was to “wing it”, showing how to test with various mobile technologies. Thing is, none of the mobile technologies actually cooperated with us. The good news – for us at least – is that Billy and I were entertaining enough for Mike Paciello to have a crazy idea of his own: a web video series called, appropriately, the Viking and the Lumberjack. Today we launch the first of (hopefully) many episodes in which Billy Gregory and I both entertain and inform. We hope you enjoy!


Video of my talk from Open Webcamp 2014


[Part 1] The Newb’s Crash Course in Test Driven Development, including Git, Grunt, Bower, and QUnit

Since we launched the private beta of Tenon.io, the feedback has been really positive and, frankly, energizing. But we have more work to do before we’re ready to open the whole thing up for the public. Much of that work centers around tests. We need more tests. Right now, we have a backlog of about 65 tests to write. Some of those tests require additional utility methods so we can keep things DRY. As I was writing one such method, I thought it might be a good topic for an intro to what I call modern web development techniques. I covered this in my recent presentation at Open Webcamp, titled The new hotness: How we use Node, Phantom, Grunt, Bower, Chai, Wercker, and more to build and deploy the next generation of accessibility testing tools. (an obnoxiously long title, I know).

In this tutorial I’m going to go over the basics of starting a project, scaffolding out a project, and give a very quick intro to Test Driven Development. There are a ton of details, nuances, and considerations that I’m going to mostly gloss over, because this tutorial touches on a lot of things and each of them is worthy of multiple blog posts in and of itself. There are a ton of links throughout this post where you can find a lot more info on these various topics, and I really encourage you to explore them.

The general principle of Test Driven Development is this: if you know your requirements, you know the acceptance criteria. Write tests (first) which check whether you’ve met the acceptance criteria. Then write the code that passes the tests. This approach has multiple benefits, especially when it comes to quality. If you write good tests and you’re passing those tests, then you’re avoiding bugs. Also, as new code is added, if the new code passes its own tests but causes prior tests to fail, then you avoid the new bugs as well. This assumes, of course, that you’re writing good tests. At Tenon, we’ve seen our own bugs arise from tests that didn’t take into consideration some edge case scenarios. In my opinion, this demonstrates the best part about TDD: all we needed to do was add a new test fixture that matched the failing case, modify the code, check that the tests passed, and the bug was squashed.

Some background preparation

In this tutorial I’m only making a really tiny jQuery plugin, but we’re going to pretend it is an actual project.
Every single project I embark on has a local development environment and its own repository for version control. Over many years, I’ve learned the hard way that I can’t have a single dev server for everything I do and version control is critical. This is because chances are pretty high that I’ll eventually need to re-use, refactor, or expand on something, even if I consider it purely experimental at the time.

So, the first step for me is always to create the project and set up the version control. I use Git for version control and I use Bitbucket to host the repositories. I type these items in Terminal to get everything started:

mkdir /path/to/your/project
cd /path/to/your/project
git init
git remote add origin git@bitbucket.org:karlgroves/jquery-area.git

So, for the newbs: I’ve made the folder to hold the project using mkdir, I went to it using cd, I initialized the repository using git init and then I added the remote location using git remote add origin. The next step I often take is to set up the new host in MAMP but in this case I don’t need to since it is just a small jQuery plugin being written.

Every bit of code discussed in this tutorial can be found on Bitbucket at https://bitbucket.org/karlgroves/jquery-area. To download & use that code to follow along, do this:

mkdir /path/to/your/project
cd /path/to/your/project
git clone git@bitbucket.org:karlgroves/jquery-area.git

Every feature must be driven by a need

I’m a very strong proponent of Agile software development processes and a very strong believer in requirements driven by a user-oriented need, often referred to as a User Story. Good user stories follow the INVEST pattern. Once a User Story has been defined, it is broken down into the distinct tasks that need to be performed to complete the story. For most user stories, there are likely to be multiple tasks. For this tutorial our user story is simple:

As a test developer, I want to be able to create tests which check for an actionable object’s dimensions.

Given the above, we then need to determine what tasks must be performed in order to complete the story. Since we’re testing for an actionable object’s final dimensions – and because we use jQuery – we want to test the values returned for .innerHeight() and .innerWidth(). This is because border and margin aren’t part of the “hit area” for actionable items. We also want to determine the overall area of the object. So our task in this case is pretty simple:

Create a jQuery plugin that will calculate an object’s dimensions

We determined this to be a story with a single task because that’s all it requires. But we also determined that, down the road, we may need more than just actionable objects, so we’ll let it be used for any object. In reality this plugin will only work for HTML elements that can take up space. Some elements, like <br>, don’t take up any space, but we won’t be using this for them.
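To sketch what that single task amounts to (the real code lives in the Bitbucket repo; the names here are my assumptions, with the arithmetic kept in a plain function so it can be unit-tested without a DOM):

```javascript
// Core calculation: area is simply the product of the inner dimensions.
function computeArea(innerWidth, innerHeight) {
  return innerWidth * innerHeight;
}

// jQuery wrapper: .innerWidth()/.innerHeight() include padding but exclude
// border and margin -- i.e. the actionable "hit area" described above.
if (typeof jQuery !== 'undefined') {
  jQuery.fn.area = function () {
    return computeArea(this.innerWidth(), this.innerHeight());
  };
}

console.log(computeArea(44, 44)); // 1936 -- e.g. a 44x44px touch target
```

Splitting the math out of the jQuery wrapper is a deliberate choice: the wrapper needs a browser, but the calculation it delegates to can be covered by fast, DOM-free tests.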

Set up the project

In reality, this sort of simple plugin doesn’t require its own project, but go with me here.

The first step, after creating a local folder and setting up the Git repository, is to “scaffold” out the project – that is, get the basic structure in place. One of the best tools out there for this is Yeoman. Depending on the nature of your project, there may already be an official Yeoman Generator for your type of project. In fact, Sindre Sorhus has already created one for jQuery plugins. No matter your approach, it makes sense to start out with a basic structure for your project.

I didn’t use the Yeoman generator for this project, mostly because I have my own set of preferences. The best approach, if I planned on making a habit of making jQuery plugins, would be to fork the Yeoman Generator and use it as a basis for my own. Either way, here’s how my structure winds up:

  • Folders
    • src – this is where the source file(s) go. For instance, in a big project involving multiple files, there may be several files which get concatenated or compiled (or both) later
    • dist – this is the final location of the files to be used later. For instance, a project like Bootstrap may have several files in ‘src’ which get concatenated and minified for distribution here in the ‘dist’ folder
    • test – this is where the unit test files go
  • Files in the project root. This holds many of the project related files such as configuration files, etc. Many of these files allow other developers involved in the project to work efficiently by setting up shared settings at the project level.
    • .bowerrc – this is a JSON formatted configuration file. There are a lot more interesting things you can do with this file, but all we’re going to do is tell it where our bower components are located.
    • .editorconfig – this is a file to be shared with other developers to share configuration settings for IDEs for things like linefeed styles, indentation styles, etc. This (helps) avoid silly arguments over things like tabs vs. spaces, character encodings, etc.
    • .gitattributes – this is another file allowing you to do some project-level configuration
    • .gitignore – this lets you establish some files to be ignored by git. You can even find tons of example .gitignore files
    • .jshintrc – One of the Grunt tasks we’ll be talking about is JSHint: “… a community-driven tool to detect errors and potential problems in JavaScript code and to enforce your team’s coding conventions.” The options for the JSHint task can either be put directly into your Gruntfile or into an external file like this one.
    • jscs.json – this is a configuration for a coding style enforcement tool called JSCS.
    • CONTRIBUTING.md – common convention for open source projects is to add this file to inform possible contributors how they can help and what they need to know.
    • LICENSE – another convention is to provide a file as part of the repository which explains the appropriate license type for the project.
    • README.md – finally, in terms of convention, is the README file which provides an overview of the project. The README file often includes a description of what the project is all about and how to use it.
    • jquery manifest file (area.jquery.json) – If you plan on publishing a jQuery plugin, you need to create a JSON-formatted package manifest file to describe your plugin
    • package.json – This JSON-formatted file allows you to describe your project according to the CommonJS package format and describes things like your dependencies and other descriptive information about the project
    • Gruntfile.js – This file allows you to define and configure the specific tasks you’ll be running via Grunt.
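To make the Gruntfile concrete, here’s a minimal sketch of what ours might look like; the task options are abbreviated and the file paths are assumptions, so treat this as a shape, not the repo’s actual file:

```javascript
module.exports = function (grunt) {
  'use strict';

  // Lazy-load every grunt-* task from package.json, and time each task run.
  require('load-grunt-tasks')(grunt);
  require('time-grunt')(grunt);

  grunt.initConfig({
    // Empty out 'dist' before writing the compiled plugin.
    clean: { dist: ['dist'] },

    // Syntax-check the source and tests, using shared options from .jshintrc.
    jshint: {
      options: { jshintrc: '.jshintrc' },
      src: ['src/**/*.js', 'test/**/*.js']
    },

    // Run the QUnit test pages.
    qunit: { all: ['test/**/*.html'] },

    // Minify the plugin into 'dist'.
    uglify: {
      dist: { files: { 'dist/jquery.area.min.js': ['src/jquery.area.js'] } }
    },

    // Re-run lint + tests automatically whenever a source file is saved.
    watch: {
      src: {
        files: ['src/**/*.js'],
        tasks: ['jshint', 'qunit']
      }
    }
  });

  grunt.registerTask('default', ['jshint', 'qunit', 'clean', 'uglify']);
};
```

With this in place, `grunt watch` handles the save/lint/test cycle during development, and a bare `grunt` produces the checked, tested, minified build.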

Task Automation via Grunt

As described above, we’re going to be using Grunt to automate tasks. To use Grunt you first need to install Node. Once you have Node installed, all you need to do is install Grunt’s command-line interface via the Node Package Manager (npm) like so:

npm install -g grunt-cli

If you were starting your project from scratch, you’d want to find the plugins you want and follow each one’s instructions to install. Usually the install requires little more than running:

npm install PLUGIN_NAME_HERE --save-dev

So, installing the Grunt JShint plugin would be:

npm install grunt-contrib-jshint --save-dev

For this tutorial, if you’ve cloned the repo for the jquery area plugin, run this instead:

npm install && bower install

This will install all of the dev dependencies for the Grunt tasks as well as installing the jQuery and QUnit files needed for testing.

Let’s back up a second: what is Grunt?

Grunt is a “JavaScript task runner”. The goal of Grunt is to facilitate the automation of developer tasks. I discussed automation in an earlier blog post. Like any other tool, the purpose is to allow us either to do things more efficiently or to do things we could never do in the first place:

As tools and technologies continue to evolve, mankind’s goals remain the same: make things easier, faster, better and make the previously impossible become possible.

There are some tasks that developers do over and over during their regular day-to-day work which are made far easier through automation. There are even some automation-related tasks developers do which can be further automated. In this regard, Grunt can be seen as a way to apply DRY even to human effort. I’m a huge fan of that idea.

The specific Grunt plugins we’ll be using are:

  • Load Grunt Tasks (load-grunt-tasks) – this lets us do some lazy loading of all of the Grunt tasks.
  • Time Grunt (time-grunt) – this will show how long each task takes. This can be pretty important when running a lot of tasks or a single task (like a bunch of unit tests) that takes a long time.
  • Clean (grunt-contrib-clean) – we’ll be using this one to simply clean out the ‘dist’ folder prior to adding the final compiled plugin
  • Watch (grunt-contrib-watch) – this is a hugely beneficial task for us, because it will allow us to automatically run specific tasks whenever new changes are saved. For instance, we can set it up so that whenever the plugin file is changed, it runs JSHint and JSCS on it.
  • JSHint (grunt-contrib-jshint) – This task does some syntax checking of JavaScript files to detect potential errors. This kind of task can help you avoid pretty silly bugs based on simple mistakes
  • JSON Lint (grunt-jsonlint) – A bit like JSHint, this does syntax checking on JSON files. For us, this specifically saves us from problems with our configuration files which would in turn cause issues with our tasks running properly.
  • JSCS Checker (grunt-jscs-checker) – JavaScript Code Style Checker, or JSCS, allows us to enforce some coding style conventions for your project.
  • QUnit (grunt-contrib-qunit) – QUnit is the JavaScript unit testing framework we’ll be using.
  • Connect (grunt-contrib-connect) – This task spins up a local Connect web server, useful for serving the test pages.
  • Uglify (grunt-contrib-uglify) – This task will do code minification on our plugin file and place it in the ‘dist’ folder.

Our Workflow: How Grunt and QUnit come into play

In this scenario we’re going to have some ‘watch’ tasks that run while we’re developing, primarily to make sure we don’t make silly coding style mistakes. Along the way, we’ll do test-driven development: defining our acceptance criteria and coding to meet them. Grunt allows us to automate the performance of tasks that we, as developers, do repetitively. As I’ve said in other posts:

In any case where a capability exists which can replace or reduce human effort, it only makes sense to do so. In any case where we can avoid repetitious effort, we should seek to do so.

This is exactly where tools like Grunt and Gulp truly shine. Instead of repetitively saving the files, then running jshint, then jscs, then qunit, then minifying the source, then copying it over to our dist folder, we can avoid that tedium through automation. We can establish a series of tasks, configured to our preferences, to be automatically run while we work, thus increasing our efficiency and quality.

Up next in Part 2: Actual TDD

At 2200+ words already, we’ll have to save the discussion of the TDD process itself for Part 2. We’ll go through defining the tests, creating fixtures, and writing the code. Stay tuned!


What an incredible start to today

This morning I’m sitting here in bed as I do virtually every morning: Working on one of many programming projects. Sometimes they’re “official” projects and sometimes just experiments. The big difference today is that I’m in San Jose, CA for Open Web Camp. About 45 minutes ago I spoke briefly with my wife. As I put my phone down I noticed a missed call and voicemail from a number I didn’t recognize.

Usually when I get calls like that, they’re from people who want a VPAT written urgently. They’ve arrived at this site after Googling “VPAT”, landed on this post, and then called me to tell me they need a VPAT written urgently. I send them over to Brian Landrigan at The Paciello Group and that’s that. But this call was different:

“I’m not sure if this is the right number to call or not. Based on your voicemail greeting, you sound like you might be younger than the person I’m looking for, but I’m looking for a relative of Fred T. Groves…”

I hung up on the voicemail and called the number right back.

Fred Groves was my uncle. He was a US Marine who died on Iwo Jima in World War II. He was just a boy when he died. My grandfather had signed a special permission form for Fred to join the Marines at 17, and Fred was barely 18 when he died on Iwo Jima. In other words, had he waited until he was 18 to enlist, he wouldn’t even have been at Iwo Jima. It tore my grandfather apart. Naturally I never got to meet Fred. In fact, my own father was only 4 years old when Fred died.

I spent a few moments on the phone with the guy who had called me. He must’ve been around my age and was calling on behalf of his father-in-law. He proceeded to tell me the stories his father-in-law had about Fred, and how he tried to “look after” Fred because he was so young. He was there – literally there – at Fred’s last moments on this planet. To this day, nearly 70 years after Iwo Jima, this guy’s father-in-law still talks about Fred.

It makes me both proud and sad to have had a relative that touched others so deeply and yet to have been lost so early.

Affirming the Consequent

Today I came across a post by Simon Harper titled Web Accessibility Evaluation Tools Only Produce a 60-70% Correctness, which is essentially a response to my earlier critique of a seriously flawed academic paper. I submitted a response on Simon’s site, but I want to copy it here for my regular readers. One thing that specifically bothers me is why the responses continue to dodge the specific challenges I raise. You cannot claim something without evidence, and you cannot supply data for one thing and claim that it leads to additional, wholly unrelated conclusions. So, here goes:

Simon,

Good post, and thank you for the response. It is unfortunate, however, that you didn’t read or respond to what I wrote. It is also unfortunate that the paper’s authors have similarly chosen not to respond directly to my statements. The blanket response of “well, just replicate it” is an attempt at dodging my response and my [specific] criticisms of the paper (which, again, you admittedly haven’t read). Furthermore, there’s little use in attempting to perform the same experiments when the conclusions presented have nothing whatsoever to do with the data.

You said:
“Web accessibility evaluation can be seen as a burden which can be alleviated by automated tools.”
Actually, they don’t say that.

“In this case the use of automated web accessibility evaluation tools is becoming increasingly widespread.”
No data is supplied for this at all.

“Many, who are not professional accessibility evaluators, are increasingly misunderstanding, or choosing to ignore, the advice of guidelines by missing out expert evaluation and/or user studies.”
No data is supplied for this at all.

“This is because they mistakenly believe that web accessibility evaluation tools can automatically check conformance against all success criteria.”
No data is supplied for this at all.

“This study shows that some of the most common tools might only achieve between 60 and 70% correctness in the results they return, and therefore makes the case that evaluation tools on their own are not enough.”

Of all the things you said, this is the only thing actually backed by the data from the paper. Literally everything else is a case of affirming the consequent.

The data that they do present is very compelling and matches my own experience. The significant amount of variation between the tools tested was pretty shocking as well, and once you get past the unproven, hyperbolic claims, it is very interesting.

If this paper’s authors were to gather and present actual data regarding usage patterns (re: the claim that “the use of automated web accessibility evaluation tools is becoming increasingly widespread”) then I wouldn’t be so critical. There is no question that the data needed to substantiate this and similar statements simply isn’t supplied.

Finally, I’d like to address the statement “evaluation tools on their own are not enough”. As I say in my blog post, this is so obvious that it is hardly worth mentioning. No legitimate tool vendor claims otherwise. I’ve been working as an accessibility consultant for a decade. I’ve worked for, alongside, and in competition with all of the major tool vendors and have never heard any of them say that using their tool alone is enough. Whether end users think this is another matter. Again, it’d be great if the paper’s authors had data to show this happening, since they claim that it is.

The implication of this paper is that because tools do not provide complete coverage, they should not be used. This is preposterous and, I believe, born from a lack of experience outside of accessibility and in modern software development environments. Automated testing, ranging from basic static code linting to unit testing to automated penetration testing, is the norm, and for good reason: it helps increase quality. But ask *any* number of skilled developers whether “passing” a check by JSHint means their JavaScript is good and you’ll get a universal “No”. That doesn’t stop contrib-jshint from being the most downloaded Grunt plugin (http://gruntjs.com/plugins). Ask any security specialist whether using IBM’s Rational Security is enough to ensure a site is secure, and they’ll say “No”. That doesn’t diminish its usefulness as a *tool* in a mature security management program.

Perhaps what we need most in terms of avoiding an “over-reliance” on tools is for people to stop treating them like they’re all-or-nothing.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development, email me directly at karl@karlgroves.com or call me at +1 443-875-7343

Tutorial: Creating a PHP class to use with Tenon.io

Introduction

Just wanna get the code? All of the code for this tutorial is available in an open repository on Bitbucket

Tenon.io is an API that facilitates quick and easy JavaScript-aware accessibility testing. The API accepts a large number of request parameters that allow you to customize how Tenon does its testing and returns its results. Full documentation for client developers is available in a public repository on Bitbucket. Because Tenon is an API, getting it to do your accessibility testing requires a little bit of work, though users have to do relatively little to submit a request and deal with the response. This blog post shows an example of how to do that with a simple PHP class and also provides a method of generating a CSV file of results.

Even though Tenon is an API, you can create a simple app around it very easily. The first thing, of course, is a Tenon.io API key; go to Tenon.io to get one. Right now, Tenon is in Private Beta, so if you’re interested in getting started right away, email karl@tenon.io to get your key. The second thing you need is a PHP-enabled server with cURL. Most default installs of PHP on web hosts will have it; if not, installation is easy.
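If you're not sure whether your server has cURL enabled, a quick check will tell you. This is just a generic PHP sketch, nothing Tenon-specific:

```php
<?php
// Quick environment check: confirm the cURL extension is loaded
// before trying to talk to the Tenon API.
if (extension_loaded('curl')) {
    echo "cURL is available\n";
} else {
    echo "cURL is missing - install or enable the php-curl extension\n";
}
```

You can also run `php -m` on the command line and look for `curl` in the list of loaded modules.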

How to use this class

Using this class is super easy. In the code chunk below, we’re merely going to pass some variables to the class and get the response. This is not production-ready code. There are a lot of areas where this can be improved. Use this as a starting point, not an end point.

<?php
require('tenon.class.php');
define('TENON_API_KEY', 'this is where you enter your api key');
define('TENON_API_URL', 'http://www.tenon.io/api/');
define('DEBUG', false);

$opts['key'] = TENON_API_KEY;
$opts['url'] = 'http://www.example.com'; // enter a real URL here, of course
$tenon = new tenon(TENON_API_URL, $opts);
$tenon->submit(DEBUG);

Using the code chunk above, you now have a variable, $tenon->tenonResponse, formatted according to the Tenon response format (read the docs for full details.)

That’s it! From there, all you need to do is massage that JSON response into something useful for your purposes.
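For instance, here's a minimal sketch of that massaging. The JSON string below is a made-up fragment, shaped like the resultSummary fields used later in this tutorial, purely for illustration:

```php
<?php
// Decode a Tenon-style JSON response and pull out a couple of
// summary numbers. The sample string is fabricated for illustration.
$json = '{"resultSummary":{"issues":{"totalIssues":12,"totalErrors":9,"totalWarnings":3}}}';

$response = json_decode($json, true); // decode into a multidimensional array

if (!is_null($response)) {
    $issues = $response['resultSummary']['issues'];
    echo 'Total issues: ' . $issues['totalIssues'] . "\n"; // Total issues: 12
    echo 'Total errors: ' . $issues['totalErrors'] . "\n"; // Total errors: 9
}
```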

Let’s walk through a class that can help us do that.

Give it a name

First, create a file called tenon.class.php. Then start the file like so.

<?php
class tenon
{
...

Declare some variables

Now, at the top of the class, we declare some variables:

  • $url – the URL of the Tenon.io API itself
  • $opts – an array of your request parameters
  • $tenonResponse – populated by the JSON response from Tenon
  • $rspArray – a multidimensional array of the decoded response

    protected $url, $opts;
    public $tenonResponse, $rspArray;

Class Constructor

Time to get to our actual class methods. First up is our class constructor. Since constructors in PHP cannot return a value, we just set up some instance variables to be used by other methods. The arguments are the $url and $opts variables discussed above.


    /**
     * Class constructor
     *
     * @param   string $url  the API url to post your request to
     * @param    array $opts options for the request
     */
    public function __construct($url, $opts)
    {
        $this->url = $url;
        $this->opts = $opts;
        $this->rspArray = null;
    }

Submit your request to Tenon

Next up is the method that actually fires the request to the API. This method is little more than a wrapper around some cURL calls; PHP's cURL bindings are excellent and well suited to this purpose.

This method passes through our request parameters (from the $tenon->opts array) to the API as a POST request and returns a variable, $tenon->tenonResponse, populated with the JSON response from Tenon.


    /**
     * Submits the request to Tenon
     *
     * @param   bool $printInfo whether or not to print the output from curl_getinfo (usually for debugging only)
     *
     * @return  void  populates $this->tenonResponse with the JSON results
     */
    public function submit($printInfo = false)
    {
        if (true === $printInfo) {
            echo '<h2>Options Passed To TenonTest</h2><pre><br>';
            var_dump($this->opts);
            echo '</pre>';
        }

        //open connection
        $ch = curl_init();

        curl_setopt($ch, CURLOPT_URL, $this->url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_FAILONERROR, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $this->opts);

        //execute post and get results
        $result = curl_exec($ch);

        if (true === $printInfo) {
            echo 'ERROR INFO (if any): ' . curl_error($ch) . '<br>';
            echo '<h2>Curl Info </h2><pre><br>';
            print_r(curl_getinfo($ch));
            echo '</pre>';
        }

        //close connection
        curl_close($ch);

        //the test results
        $this->tenonResponse = $result;

    }

Decode the response

From here, how you deal with the JSON is up to you. Most programming languages have ways to deal with JSON. PHP has some native functionality, albeit simple, to decode and encode JSON. Below, we use json_decode to turn the JSON into a multidimensional array. This gives us the $tenon->rspArray to use in other methods later.


    /**
     * Decodes the JSON response into the $rspArray member
     *
     * @return bool true on success, false on failure
     */
    public function decodeResponse()
    {
        if ((false !== $this->tenonResponse) && (!is_null($this->tenonResponse))) {
            $result = json_decode($this->tenonResponse, true);
            if (!is_null($result)) {
                $this->rspArray = $result;
                return true;
            }
        }
        return false;
    }

Make some sense of booleans

Tenon returns some of its information as ‘1’ or ‘0’. We’re going to want that to be more useful for human consumption, so we convert those to ‘Yes’ and ‘No’. Because of some weirdness with json_decode and PHP’s loose typing, sometimes digits are actually strings, so that’s why we’re not using strict comparison.
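You can see that loose-typing behavior in a quick sketch:

```php
<?php
// PHP's loose comparison (==) treats the string '1' and the integer 1
// as equal; strict comparison (===) does not. This is why the method
// below compares with == rather than ===.
var_dump('1' == 1);  // bool(true)
var_dump('1' === 1); // bool(false)
```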


    /**
     * @param $val
     *
     * @return string
     */
    public static function boolToString($val){
        if($val == '1'){
            return 'Yes';
        }
        else{
            return 'No';
        }
    }

Create a summary

OK, now it is time to start doing something useful with the response array. The first thing we need is a summary of how our request went and the status of our document. This method creates a string of HTML showing the following details:

  • Your Request – Tenon echoes back your request to you. This section reports the request that Tenon uses, which may include items set to their defaults.
  • Response Summary – This section gives a summary of the response, such as response code, response type, execution time, and document size.
  • Global Stats – This section gives some high-level stats on error rates across all tests run by Tenon. When compared against your document’s density (below), this is useful for getting an at-a-glance idea of your document’s accessibility.
  • Density – Tenon calculates a statistic called ‘Density’ which is, basically, how many errors you have compared to how big the document is. In other words, how dense are the issues on the page?
  • Issue Counts – This section gives raw issue counts for your document
  • Issues By Level – This section provides issue counts according to WCAG Level
  • Client Script Errors – one of the things that may reduce the ability of Tenon to test your site is JavaScript errors and uncaught exceptions. A cool feature of Tenon is that it reports these to you.

    /**
     * Builds an HTML summary of the request and response
     *
     * @return bool|string false if there is no decoded response, otherwise a string of HTML
     */
    public function processResponseSummary()
    {
        if ((false === $this->rspArray) || (is_null($this->rspArray))) {
            return false;
        }

        $output = '';
        $output .= '<h2>Your Request</h2>';
        $output .= '<ul>';
        $output .= '<li>DocID: ' . $this->rspArray['request']['docID'] . '</li>';
        $output .= '<li>Certainty: ' . $this->rspArray['request']['certainty'] . '</li>';
        $output .= '<li>Level: ' . $this->rspArray['request']['level'] . '</li>';
        $output .= '<li>Priority: ' . $this->rspArray['request']['priority'] . '</li>';
        $output .= '<li>Importance: ' . $this->rspArray['request']['importance'] . '</li>';
        $output .= '<li>Report ID: ' . $this->rspArray['request']['reportID'] . '</li>';
        $output .= '<li>System ID: ' . $this->rspArray['request']['systemID'] . '</li>';
        $output .= '<li>User-Agent String: ' . $this->rspArray['request']['uaString'] . '</li>';
        $output .= '<li>URL: ' . $this->rspArray['request']['url'] . '</li>';
        $output .= '<li>Viewport: ' . $this->rspArray['request']['viewport']['width'] . ' x ' . $this->rspArray['request']['viewport']['height'] . '</li>';
        $output .= '<li>Fragment? ' . self::boolToString($this->rspArray['request']['fragment']) . '</li>';
        $output .= '<li>Store Results? ' . self::boolToString($this->rspArray['request']['store']) . '</li>';
        $output .= '</ul>';

        $output .= '<h2>Response</h2>';
        $output .= '<ul>';
        $output .= '<li>Document Size: ' . $this->rspArray['documentSize'] . ' bytes </li>';
        $output .= '<li>Response Code: ' . $this->rspArray['status'] . '</li>';
        $output .= '<li>Response Type: ' . $this->rspArray['message'] . '</li>';
        $output .= '<li>Response Time: ' . date("F j, Y, g:i a", strtotime($this->rspArray['responseTime'])) . '</li>';
        $output .= '<li>Response Execution Time: ' . $this->rspArray['responseExecTime'] . ' seconds</li>';
        $output .= '</ul>';

        $output .= '<h2>Global Stats</h2>';
        $output .= '<ul>';
        $output .= '<li>Global Density, overall: ' . $this->rspArray['globalStats']['allDensity'] . '</li>';
        $output .= '<li>Global Error Density: ' . $this->rspArray['globalStats']['errorDensity'] . '</li>';
        $output .= '<li>Global Warning Density: ' . $this->rspArray['globalStats']['warningDensity'] . '</li>';
        $output .= '</ul>';

        $output .= '<h3>Density</h3>';
        $output .= '<ul>';
        $output .= '<li>Overall Density: ' . $this->rspArray['resultSummary']['density']['allDensity'] . '%</li>';
        $output .= '<li>Error Density: ' . $this->rspArray['resultSummary']['density']['errorDensity'] . '%</li>';
        $output .= '<li>Warning Density: ' . $this->rspArray['resultSummary']['density']['warningDensity'] . '%</li>';
        $output .= '</ul>';

        $output .= '<h3>Issue Counts</h3>';
        $output .= '<ul>';
        $output .= '<li>Total Issues: ' . $this->rspArray['resultSummary']['issues']['totalIssues'] . '</li>';
        $output .= '<li>Total Errors: ' . $this->rspArray['resultSummary']['issues']['totalErrors'] . '</li>';
        $output .= '<li>Total Warnings: ' . $this->rspArray['resultSummary']['issues']['totalWarnings'] . '</li>';
        $output .= '</ul>';

        $output .= '<h3>Issues By WCAG Level</h3>';
        $output .= '<ul>';
        $output .= '<li>Level A: ' . $this->rspArray['resultSummary']['issuesByLevel']['A']['count'];
        $output .= ' (' . $this->rspArray['resultSummary']['issuesByLevel']['A']['pct'] . '%)</li>';
        $output .= '<li>Level AA: ' . $this->rspArray['resultSummary']['issuesByLevel']['AA']['count'];
        $output .= ' (' . $this->rspArray['resultSummary']['issuesByLevel']['AA']['pct'] . '%)</li>';
        $output .= '<li>Level AAA: ' . $this->rspArray['resultSummary']['issuesByLevel']['AAA']['count'];
        $output .= ' (' . $this->rspArray['resultSummary']['issuesByLevel']['AAA']['pct'] . '%)</li>';
        $output .= '</ul>';

        $output .= '<h3>Client Script Errors, if any</h3>';
        $output .= '<p>(Note: "NULL" or empty array here means there were no errors.)</p>';
        $output .= '<pre>' . var_export($this->rspArray['clientScriptErrors'], true) . '</pre>';

        return $output;
    }

Output the issues

The most important part of Tenon is obviously the issues. The method below gets the issues and loops through them to print them out in a human-readable format. Each issue is presented to show what the issue is and where it is. For a full description of Tenon’s issue reports, read the Tenon.io Documentation.


    /**
     * Builds HTML output of the issues found
     *
     * @return   string
     */
    public function processIssues()
    {
        $issues = $this->rspArray['resultSet'];

        $count = count($issues);

        $output = '';

        if ($count > 0) {
            $i = 0;
            for ($x = 0; $x < $count; $x++) {
                $i++;
                $output .= '<div class="issue">';
                $output .= '<div>' . $i .': ' . $issues[$x]['errorTitle'] . '</div>';
                $output .= '<div>' . $issues[$x]['errorDescription'] . '</div>';
                $output .= '<div><pre><code>' . trim($issues[$x]['errorSnippet']) . '</code></pre></div>';
                $output .= '<div>Line: ' . $issues[$x]['position']['line'] . '</div>';
                $output .= '<div>Column: ' . $issues[$x]['position']['column'] . '</div>';
                $output .= '<div>xPath: <pre><code>' . $issues[$x]['xpath'] . '</code></pre></div>';
                $output .= '<div>Certainty: ' . $issues[$x]['certainty'] . '</div>';
                $output .= '<div>Priority: ' . $issues[$x]['priority'] . '</div>';
                $output .= '<div>Best Practice: ' . $issues[$x]['resultTitle'] . '</div>';
                $output .= '<div>Reference: ' . $issues[$x]['ref'] . '</div>';
                $output .= '<div>Standards: ' . implode(', ', $issues[$x]['standards']) . '</div>';
                $output .= '<div>Issue Signature: ' . $issues[$x]['signature'] . '</div>';
                $output .= '<div>Test ID: ' . $issues[$x]['tID'] . '</div>';
                $output .= '<div>Best Practice ID: ' . $issues[$x]['bpID'] . '</div>';
                $output .= '</div>';
            }
        }
        return $output;
    }

Full Usage Example

So now that we have the full class in place, let's put it all together. In the example below, we take our request parameters from the $_POST array, such as we'd get from a form submission.


<?php
define('TENON_API_KEY', 'this is where you enter your api key');
define('TENON_API_URL', 'http://www.tenon.io/api/');
define('DEBUG', false);

$expectedPost = array('src', 'url', 'level', 'certainty', 'priority',
    'docID', 'systemID', 'reportID', 'viewport',
    'uaString', 'importance', 'ref',
    'fragment', 'store', 'csv');

foreach ($_POST AS $k => $v) {
    if (in_array($k, $expectedPost)) {
        if (strlen(trim($v)) > 0) {
            $opts[$k] = $v;
        }
    }
}

$opts['key'] = TENON_API_KEY;

$tenon = new tenon(TENON_API_URL, $opts);

$tenon->submit(DEBUG);

if (false === $tenon->decodeResponse()) {
    $content = '<h1>Error</h1><p>No Response From Tenon API, or JSON malformed.</p>';
    $content .= '<pre>' . var_export($tenon->tenonResponse, true) . '</pre>';
} else {
    $content = $tenon->processResponseSummary();
    $content .= '<h2>Issues</h2>';
    $content .= $tenon->processIssues();
}
echo $content;
?>

That's it! You now have an HTML output of Tenon's response summary and issue details!

Screw it, just gimme the issues

OK, what if you just want the issues and none of that output-to-HTML stuff? Getting the issues into a CSV file is ridiculously easy with PHP. Add this method to your PHP class:


    /**
     * @param $pathToFolder
     *
     * @return bool
     */
    public function writeResultsToCSV($pathToFolder)
    {
        $url = $this->rspArray['request']['url'];
        $issues = $this->rspArray['resultSet'];
        $name = htmlspecialchars($this->rspArray['request']['docID']);
        $count = count($issues);

        if ($count < 1) {
            return false;
        }

        $rows = array();

        for ($x = 0; $x < $count; $x++) {
            $rows[$x] = array(
                $url,
                $issues[$x]['tID'],
                $issues[$x]['resultTitle'],
                $issues[$x]['errorTitle'],
                $issues[$x]['errorDescription'],
                implode(', ', $issues[$x]['standards']),
                html_entity_decode($issues[$x]['errorSnippet']),
                $issues[$x]['position']['line'],
                $issues[$x]['position']['column'],
                $issues[$x]['xpath'],
                $issues[$x]['certainty'],
                $issues[$x]['priority'],
                $issues[$x]['ref'],
                $issues[$x]['signature']
            );
        }

        // Prepend a row of headers to the beginning
        array_unshift($rows, array('URL', 'testID', 'Best Practice', 'Issue Title', 'Description',
            'WCAG SC', 'Issue Code', 'Line', 'Column', 'xPath', 'Certainty', 'Priority', 'Reference', 'Signature'));

        // MAKE SURE THE FILE DOES NOT ALREADY EXIST
        if (!file_exists($pathToFolder . $name . '.csv')) {
            $fp = fopen($pathToFolder . $name . '.csv', 'w');
            foreach ($rows as $fields) {
                fputcsv($fp, $fields);
            }
            fclose($fp);
            return true;
        }
        return false;
    }

Then all you need to do is call it like this:


<?php
define('TENON_API_KEY', 'this is where you enter your api key');
define('TENON_API_URL', 'http://www.tenon.io/api/');
define('DEBUG', false);
define('CSV_FILE_PATH', $_SERVER['DOCUMENT_ROOT'] . '/csv/');

$expectedPost = array('src', 'url', 'level', 'certainty', 'priority',
    'docID', 'systemID', 'reportID', 'viewport',
    'uaString', 'importance', 'ref',
    'fragment', 'store', 'csv');

foreach ($_POST AS $k => $v) {
    if (in_array($k, $expectedPost)) {
        if (strlen(trim($v)) > 0) {
            $opts[$k] = $v;
        }
    }
}

$opts['key'] = TENON_API_KEY;

$tenon = new tenon(TENON_API_URL, $opts);

$tenon->submit(DEBUG);

if (false === $tenon->decodeResponse()) {
    $content = '<h1>Error</h1><p>No Response From Tenon API, or JSON malformed.</p>';
    $content .= '<pre>' . var_export($tenon->tenonResponse, true) . '</pre>';
    echo $content;
} else {
    if(false !== $tenon->writeResultsToCSV(CSV_FILE_PATH)){
        echo 'CSV file written!';
    }
}
?>

Now what?

This blog post shows how easy it is to create a PHP implementation that will submit a request to Tenon, do some testing, and return results. We want to see what you can do with it. Register at Tenon.io and get started!
