Karl Groves

Tech Accessibility Consultant
  • Web
  • Mobile
  • Software
  • Hardware
  • Policy
Telephone
+1 443.875.7343
Email
karl@tenon.io
Twitter
@karlgroves

Is WCAG 2.0 too complicated?

A couple of weeks ago now, an article was posted on LinkedIn that implied WCAG was “Impossible”. Numerous others, including myself, levied sharply negative responses to the article, but not to this specific claim about WCAG being “impossible”. I’d like to help my readers understand WCAG a little bit better.

Generalized statements are particularly false

The first thing to understand is that while it is easy to find things to criticize about WCAG, many criticisms don’t stand up in context. Many people say WCAG is too long. I say it just feels that way, and in many cases you don’t need an encyclopedic knowledge of WCAG and its associated materials. You need what is relevant to you in the context of your work. Sure, the volume of techniques and failures is massive, but the number that are actually relevant to you is probably much smaller. Think about it: if you aren’t using SMIL or Silverlight or Flash, you can safely ignore those techniques and focus on what applies to you.

Accessibility, as a topic, can seem pretty nebulous if you’ve never been expected to make your systems accessible. I know this from experience. When I first got interested in accessibility, I knew nothing and made a lot of mistakes. Thankfully there were resources like the WebAIM discussion list where people freely shared information in a friendly environment. The regular contributors to that list helped me understand that accessibility isn’t hard when you learn to consider accessibility along the way.

Steps to Understanding WCAG

It is easy to call WCAG “impossible” if you don’t understand it. In fact, this is exactly why Billy Gregory and I came up with the idea of our talk Why WCAG? Whose Guideline is it, Anyway? with The Viking and The Lumberjack at CSUN this spring. Our point was that there are a number of things people misunderstand about WCAG, and our goal was to help people understand those things. The talk generated lots of discussion and even controversy, but hopefully it also helped people understand WCAG a bit more. For those who weren’t there, let me help clarify WCAG.

WCAG is a Standard

WCAG probably feels long, but as a standard it really isn’t very long at all. The presentational format of W3C documents in general definitely doesn’t help, but as a W3C Recommendation its normative content is actually pretty short. WCAG 2.0 itself is well organized if you stop and let it soak in for a minute. It has a clear Introduction that describes its purpose, structure, and associated materials. The wall-of-text presentation doesn’t help the impression of its length, but once you understand the organization of WCAG, it won’t feel so long.

Many people have criticized the decidedly high reading level at which WCAG is written. I can’t say I disagree. It is dense information, but it is also clear. We must remember: WCAG is a Standard. It has been incorporated into laws around the world and will (eventually) be incorporated into United States law as well. A standard can’t get to that point if it is contradictory or lacking in detail. At times, the WCAG Working Group agonized over the meanings of individual words – the type of activity I personally have no stomach for – and we should thank them for it. If you ever find yourself not understanding terms and phrases like “changes of context”, consult the glossary. (Hint: you can also ask for clarification on the WAI-IG mailing list.)

Peel the Onion to get what you want

As I mentioned above, one of the things that makes WCAG seem dense is its presentation. But what most people seem to miss is the true structure of WCAG. The W3C describes this structure in “Layers of Guidance”, which I simplify below:

  • Principles – At the top are four principles that provide the foundation for Web accessibility: perceivable, operable, understandable, and robust.
  • Guidelines – Under the principles are guidelines. There are 12 guidelines that provide the basic goals that authors should work toward.
  • Success Criteria – For each guideline, testable success criteria are provided.
  • Sufficient and Advisory Techniques – For each of the guidelines and success criteria in the WCAG 2.0 document itself, the working group has also documented a wide variety of techniques.

The numbering of WCAG’s Success Criteria follows the above structure: Principle, Guideline, Success Criterion. To illustrate this, let’s look at 1.3.1 Info and Relationships:

  1. Principle 1: Perceivable
  2. Guideline 1.3: Adaptable
  3. Success Criterion 1.3.1: Info and Relationships

To understand WCAG and make it feel much less “impossible”, you should first understand the Principles. They are the spirit of WCAG – the goals that an accessible system should meet for the user. Everything else about WCAG simply provides more detail on how to meet those goals.

Next up are the Guidelines. There are 12 guidelines. These are the high-level goals for each Principle. For instance: “Guideline 2.1 Keyboard Accessible: Make all functionality available from a keyboard.”

Finally, there are the Success Criteria: the specific, testable criteria against which conformance is judged.

I’m of the opinion that those who criticize WCAG as being “impossible” are concentrating on the Success Criteria without first absorbing the Principles and Guidelines. To truly get the value of WCAG, it is vital to absorb and understand it from the top down: Principles, then Guidelines, then Success Criteria. The Success Criteria are worded very specifically and clearly. If, at any time, you find yourself feeling like a Success Criterion is confusing, run through it a couple of times with a careful read.

2.1.2 No Keyboard Trap: If keyboard focus can be moved to a component of the page using a keyboard interface, then focus can be moved away from that component using only a keyboard interface, and, if it requires more than unmodified arrow or tab keys or other standard exit methods, the user is advised of the method for moving focus away. (Level A)

To understand the above, first we need to understand that this is part of Principle 2: Operable. The goal for this Principle is “User interface components and navigation must be operable.” It also falls under the first guideline of Principle 2: “Guideline 2.1 Keyboard Accessible: Make all functionality available from a keyboard.” Let’s parse the Success Criterion itself by breaking it down:

  1. If keyboard focus can be moved to a component of the page using a keyboard interface,
  2. then focus can be moved away from that component using only a keyboard interface,
  3. and, if it requires more than unmodified arrow or tab keys or other standard exit methods,
  4. the user is advised of the method for moving focus away.

Our decision tree for testing this is then (also sketched in code after the list):

  1. Can focus be moved to a component of the page?
  2. Can that focus be moved to the component using a keyboard? (If not then there’s a 2.1.1 issue)
  3. Can that focus be moved away using a keyboard?
  4. Can focus be moved away using standard exit methods? (read as: tab or shift+tab, but this might depend on the type of control)
  5. If standard exit methods can’t be used, is the different method disclosed to the user?
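
To make the branching concrete, here is a minimal sketch of that decision tree expressed as a function. This is not an automated test (detecting a keyboard trap generally requires a human at the keyboard), and the `Observation` shape and function name are hypothetical, used only to illustrate the logic.

```typescript
// Hypothetical record of what a human tester observed for one component.
interface Observation {
  focusable: boolean;              // 1. Can focus be moved to the component at all?
  focusableViaKeyboard: boolean;   // 2. ...and can it be moved there using only a keyboard?
  canLeaveViaKeyboard: boolean;    // 3. Can focus be moved away using only a keyboard?
  leavesWithStandardKeys: boolean; // 4. Do tab / shift+tab (or other standard methods) work?
  exitMethodDisclosed: boolean;    // 5. If not, is the alternative method explained to the user?
}

type Verdict = "pass" | "fail 2.1.2" | "fail 2.1.1" | "not applicable";

function checkNoKeyboardTrap(o: Observation): Verdict {
  if (!o.focusable) return "not applicable";        // nothing receives focus, so the SC doesn't apply
  if (!o.focusableViaKeyboard) return "fail 2.1.1"; // a Keyboard (2.1.1) problem, not a trap
  if (!o.canLeaveViaKeyboard) return "fail 2.1.2";  // focus is trapped
  if (o.leavesWithStandardKeys) return "pass";      // standard exit methods work
  return o.exitMethodDisclosed ? "pass" : "fail 2.1.2"; // non-standard exits must be disclosed
}
```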

I realize that there are a couple of pieces of information that a layperson may not know in terms of how to test the above, especially when it comes to “standard exit methods”. This is where WCAG’s large volume of related documents comes into play. Explore these documents as necessary to close that gap in understanding.

Impossible is Nothing

To borrow a phrase from Robert Pearson, “Impossible is Nothing.” WCAG is not impossible. It would never have reached final Recommendation status if it were impossible. It is unquestionably dense, but a careful read, once you understand its structure, helps considerably.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development. Email me directly at karl@karlgroves.com or call me at +1 443-875-7343.

How long does it take to test 25 Billion web pages?

If you started it during the reign of Thutmose I of Egypt, you’d be done soon.
Or you could invest several million dollars.
Or maybe doing it is just a stupid idea in the first place

On July 6, 2016, Michelle Hay of the company Sitemorse published an “article” (a term I’m using loosely here) titled “WCAG 2.0 / Accessibility, is it an impossible standard that provides the basis for excuses?”. Overall, I found the article to be very poorly written, based on a false premise, and a demonstration of extreme ignorance at Sitemorse. Many others felt the same way, and Léonie Watson’s comments address many of the factual and logical shortcomings of the article. Something I personally found interesting in the Sitemorse article is the following two sentences:

What we are suggesting, is to create a list of priorities that can be done to improve accessibility. This will be based on the data we have collected from 25+ billion pages and feedback from industry experts, clients and users.

25 billion pages is a massive number of pages. It is also extremely unlikely to be true and definitely not at all useful. To prove my point, I’ve used Tenon to gather the data I need.

Historically, Tenon averages about 6 seconds per distinct URL to access each page, test it, and return results. There are a number of factors involved in the time it takes to process a page. We frequently return responses in around a second, but some pages take up to a minute to return a response. I’ll discuss the contributing factors to the response time in more detail further down below.

Tenon does its processing asynchronously, which means that it won’t get choked down by those pages that take a longer time to test. In other words, if you test 100 pages it won’t take 6 × 100 seconds to test them all. The average time needed across the entire set will be shorter than that because Tenon returns results as soon as they’re available, in a non-blocking fashion. For example, if one page takes 30 seconds to test, Tenon could easily test and return results for a dozen or more other pages in the meantime. The goal of this experiment is to see how long it would take to test 25,000,000,000 pages using Tenon.
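
To illustrate the effect (a toy model, not Tenon’s actual code), the sketch below fires a batch of simulated tests concurrently: one slow page and a dozen fast ones finish in roughly the time of the slowest page, not the sum of all of them.

```typescript
// Stand-in for a real test call: resolves after the simulated test time elapses.
const testPage = (url: string, seconds: number): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(url), seconds * 1000));

async function demo(): Promise<void> {
  const pages: Array<[string, number]> = [
    ["https://slow.example.com/", 30], // one slow page...
    ...Array.from({ length: 12 }, (_, i): [string, number] => [`https://fast${i}.example.com/`, 2]),
  ];

  const start = Date.now();
  // All tests run concurrently, so total wall-clock time is ~30s rather than 30 + (12 * 2) = 54s.
  await Promise.all(pages.map(([url, seconds]) => testPage(url, seconds)));
  console.log(`Tested ${pages.length} pages in ${((Date.now() - start) / 1000).toFixed(1)}s`);
}

demo();
```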

Sitemorse’s article does not disclose any details about their tool. Their website is chock full of vague platitudes and discloses no substantive details on the tool. They don’t even say what kind of testing it does. Regardless, given my personal history with automated tools, I’m fairly confident that across an identical sample size Tenon is at least as fast, if not faster.

Test Approach

The test approach I used is outlined as follows, in case anyone wants to replicate what I’ve done:

  1. I wanted to test at least 16,641 distinct URLs. Across a population size of 25,000,000,000 URLs, this gives us a 99% Confidence Level with a Confidence Interval of just 1.
  2. The list of URLs piped into Tenon all come from a randomized list of pages within the top million web domains listed by Alexa and Quantcast.
  3. The testing was performed on a completely fresh install of Tenon on my local machine. That means no other users on the system, no other processes running, and all available resources being dedicated to this process (subject to some caveats below)
  4. This testing used a Bulk Tester that populates a queue of URLs and submits URLs to the Tenon API at a rate of 1 URL per second via AJAX (see the sketch after this list). It does this testing asynchronously. In other words, it just keeps sending the requests without ever waiting for a response. I could have reduced the time between requests, but it was a local install and I didn’t want to DoS my own machine, which I’m also using for work while this is going on.
  5. While the bulk tester does other things like verifying that the API is up and verifying the HTTP status code of the tested page before sending it to Tenon’s API, the elapsed time is tracked solely from the time the API request is sent to the time the API responds. This avoids the count being skewed by the bulk tester’s other (possibly time-intensive) work.
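
Here is roughly what that bulk tester looks like, as a simplified sketch. The endpoint, parameter names, and API key below are placeholders rather than Tenon’s documented API; the parts that matter are the one-request-per-second pacing and the fact that elapsed time is measured only from the moment a request is sent to the moment its response arrives.

```typescript
const API_URL = "https://example.com/api/test"; // placeholder endpoint, not the real Tenon API
const API_KEY = "YOUR_KEY_HERE";                // placeholder credential

const sleep = (ms: number): Promise<void> => new Promise((resolve) => setTimeout(resolve, ms));

async function bulkTest(urls: string[]): Promise<void> {
  const pending: Promise<void>[] = [];

  for (const url of urls) {
    const sentAt = Date.now();
    // Fire the request without awaiting it, so the queue keeps moving (asynchronous, non-blocking).
    pending.push(
      fetch(API_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ key: API_KEY, url }),
      }).then((response) => {
        // Elapsed time covers only the API round trip, not queueing or any pre-checks.
        const elapsed = (Date.now() - sentAt) / 1000;
        console.log(`${url}: HTTP ${response.status} in ${elapsed.toFixed(2)}s`);
      }),
    );
    await sleep(1000); // throttle submissions to roughly one per second
  }

  await Promise.all(pending); // wait for any stragglers before finishing
}
```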

Caveats and Concerns

This test approach carries a few caveats and concerns. In an ideal world, I would have deployed a standalone instance that fully replicates our Production infrastructure, including load balancing, database replication, and all of that. I don’t think that’s truly necessary given the stats on my local machine, which I discuss below.

Assessing System

Any accessibility testing software will be subject to the following constraints on the machine(s) hosting it. These factors will impact the ability to respond to the request, assess the document, and return a response:

  1. Available Memory: Memory allows a testing tool to store frequently accessed items in cache. For instance, Tenon makes extensive use of Memcached in order to store the results of repetitious queries on infrequently changing data. The MacBook Pro I used for this has 16GB of 1600 MHz DDR3 RAM.
  2. Available CPU: More CPU and more cores mean the server can do more work. The MacBook Pro I used for this has a 2.8GHz Intel Core i7 processor with 4 cores.
  3. Network performance: Simply put, the faster the connection between the Assessing System and the Tested System, the less time necessary to wait for all of the assets to be transferred before testing can begin. I’m on a Verizon FiOS connection getting 53 Mbps both up and down.

Overall, Tenon performs well as a local install. That said, it would be more “scientific” if this were the only thing the machine was doing, but like I said before, it is my work machine. In a Production environment, Tenon is provisioned with far more resources than it needs so it retains its responsiveness under high demand. Provisioned locally on a Virtual Machine, Tenon doesn’t require very much RAM, but it loves CPU. Although the amount of CPU I provide to the VM is sufficient, I could easily throw more requests at it if I could fully dedicate all 4 cores to the VM. Also, there were times when the local Tenon install competed heavily for network bandwidth against Google Hangouts and GoToMeeting. All in all, I doubt that the local instance’s limitations played too heavily into the results across the entire test set.

Tested System

All of the above concerns apply to the tested system. The following additional concerns on each tested URL may also impact the time needed to return results:

  1. Client-side rendering performance: One of Tenon’s most important advantages, in terms of accuracy, is that it tests the DOM of each page, as rendered in a browser. This gives Tenon significant power and broadens the range of things we can test for. One downside to this is that Tenon must wait for the page and all of its assets (images, CSS, external scripts, etc.) to load in order to effectively test. A poorly performing page that must also download massive JavaScript libraries, unminified CSS, and huge carousel images will take longer to test. For instance, if a page takes 10 seconds to render and 1 second to test, it will take a total of 11 seconds for Tenon to return the response. This is probably the most significant contributor to the time it takes to test a page in the real world.
  2. Size of the document/ Level of (in)accessibility: Among the many factors that contribute to the time it takes to assess a page and return results is how bad the page is. In Tenon’s case, our Test API doesn’t test what isn’t there. For instance, if there are no tables on a page then the page won’t be subjected to any table-related tests. In other words, even though Tenon can test nearly 2000 specific failure conditions, how many of those that it actually tests for is highly dependent on the nature of the tested document – smaller, more accessible documents are tested very quickly. The converse is also true: Larger, more complex, documents and documents with a lot of accessibility issues will take longer to test. The most issues Tenon has ever seen in one document is 6,539.

Results

  • The very first result was sent at 7/12/16 21:59 and the very last result was 7/13/16 18:38.
  • The total number of URLs successfully tested was 16,792.
  • That is 74,340 seconds total, with an average time across the set of 4.43 seconds.
  • There were several hundred URLs along the way that returned HTTP 400+ results. This played into the total time necessary, but I purged those from the result set to give Sitemorse’s claim the benefit of the doubt.

Total Issues

Minimum 0.00
Maximum 2015.00
Mean 66.89
Median 37.00
Mode 0.00
Standard Deviation 85.79
Kurtosis 49.02
Skewness 4.35
Coefficient of Variation 1.28

Errors

Minimum 0.00
Maximum 2011.00
Mean 47.51
Median 28.00
Mode 0.00
Standard Deviation 67.92
Kurtosis 126.13
Skewness 7.46
Coefficient of Variation 1.43

Warnings

Minimum 0.00
Maximum 464.00
Mean 19.38
Median 1.00
Mode 0.00
Standard Deviation 55.10
Kurtosis 12.39
Skewness 3.64
Coefficient of Variation 2.84

Elapsed Time (measured on a per-URL basis)

Minimum 0.37 seconds
Maximum 49.87 seconds
Mean 9.50 seconds
Median 7.50 seconds
Mode 6.43 seconds
Standard Deviation 7.26 seconds
Kurtosis 3.20
Skewness 1.66
Coefficient of Variation 0.77

Using this to assess Sitemorse’s claim

As a reminder, the sample size of 16,792 pages is more than enough to have a 99% Confidence Level with a Confidence Interval of just 1. One possible criticism of my methods might be to suggest that it would be more “real-world” if I tested pages by discovering and accessing them via spidering. That way true network and system variations could have had their impact as they normally would. Unfortunately that would also add another unnecessary factor to this: the time and resources necessary to run a spider. Having all of the URLs available to me up front allows me to focus only on the testing time.

Given this data, let’s take a look at Sitemorse’s claim that they’ve tested 25,000,000,000 pages:

At 4.43 seconds per page, it would have taken Sitemorse’s tool 3,509.5 years to test 25,000,000,000 pages running around the clock – 24 hours a day, 7 days a week, 365 days a year with zero downtime. Could they have done it faster? Sure. They could have used more instances of their tool. All other things being equal, running 2 instances could cut the time in half. With an average assessment time of 4.43 seconds, they would need 3,510 instances running 24/7/365 to do this work in less than a year.

(4.43 seconds each * 25,000,000,000) / (60 seconds per minute * 60 minutes per hour * 24 hours per day * 365 days per year)
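
Spelled out as code, using the 4.43-second average from the results above:

```typescript
const pages = 25_000_000_000;                 // Sitemorse's claimed page count
const secondsPerPage = 4.43;                  // average per-page time from the results above
const secondsPerYear = 60 * 60 * 24 * 365.25; // ≈ 31,557,600

// One instance, running non-stop:
const years = (secondsPerPage * pages) / secondsPerYear;
console.log(years.toFixed(1)); // ≈ 3509.5 years

// To finish in under a year, you need roughly that many instances running in parallel:
console.log(Math.ceil(years)); // 3510 instances
```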

Using Tenon’s average monthly hosting costs, testing 25,000,000,000 pages would cost them nearly $10,530,000 in server costs alone to run the necessary number of instances to get this analysis done in less than a year. This monetary cost doesn’t include any developer or server admin time necessary to develop, maintain, and deploy the system. The Sitemorse article doesn’t disclose how long the data gathering process took or how many systems they used to do the testing. Regardless, it would take 351 instances to perform this task in less than a decade.

Why have I focused on a year here? Because that’s the maximum amount of time I’d want this task to take. They could have done it across the last decade for all we know. However, the longer it takes to do this testing, the less reliable the results become. Across a decade – or even across more than a year – it becomes more likely that technology trends would necessitate changes to how and what is tested. A few years ago, for instance, it was prudent to ensure all form fields had explicitly associated LABEL elements. Now, with the proliferation of ARIA-supporting browsers and assistive technologies, your tests need to cover ARIA as well. Data gathered using old tests would be less accurate and less relevant the longer this process took. I realize I’m assuming a lot here. They could have continually updated their software along the way, but I strongly doubt that to have been the case. Keep in mind that this 24/7/365 approach is vital to getting the process done as fast as possible. Any downtime, any pause, and any change along the way would only have added to the time.

Giving them the benefit of the doubt for a moment, let’s assume they had the monetary and human resources for this task. Even if they did something like this, it raises the question: Why?

The entire idea is ridiculous

I’m not saying that it isn’t possible to test 25,000,000,000 pages. In fact, massive companies could perform such a task in no time at all. But I also think doing it is a ridiculous idea. And when I say “ridiculous” I mean it in the strictest sense of the word. No matter how they might have performed such a project, whether with 351 instances across a decade, 3,510 instances for less than a year, or something in between, doing so is an ignorant, uninformed, and useless pursuit. It indicates a woeful lack of knowledge and experience in development, accessibility, and statistics.

In their article they state:

With this information we will consider the checkpoints of WCAG 2.0 and come up with 10 things that should be dealt with to improve accessibility which will all be understandable, manageable, measurable and achievable.

The idea of making such decisions based on rigorous data gathering sounds impressive. I have a lot of respect for approaches that draw their conclusions from data rather than opinion. The question that must be asked, however, is whether the type of information they seek might already exist or, barring that, whether it could be gathered using a different, cheaper, faster, or more accurate approach. If you were to ask accessibility experts what their “Top 10 Things” are, you’d get a pretty wide variety of answers. You’d probably get things that are vague, overly broad, or driven by personal bias. However, if you were to moderate such a process using the Delphi Method [PDF], you’d probably come to consensus rather quickly on what those “Top 10 Things” would be. In fact, I argue that given a hand-picked list of respected industry experts, this process could be completed in a weekend. This illuminates the first characteristic of Sitemorse’s claim that makes it worthy of ridicule.

The second characteristic that makes this claim worthy of ridicule is the fact that they used an automated tool for this task. That’s right, I’m the founder of a company that makes an automated tool and I’m telling you that using data from an automated tool to do research like this is stupid. This is because there’s only so much that an automated tool can detect. Automated testing tools are not judges. They cannot prove or disprove any claims of conformance and they cannot even definitively tell you what the most frequent or highest impact issues are on a specific page.

Automated testing tools are excellent at doing one thing and one thing only: finding issues that the tool has been programmed to find. Nothing more. Any time you use an automated testing tool, you’re subjecting the tested system to a pre-defined set of checks as determined by the product’s developer. The nature, number, accuracy, and relevance of those checks will vary from one tool to another. There are a large number of things that cannot be tested for via automation and an equally large number of things that are too subjective to test for.

The application of automated testing results to a process like this is only relevant if it is being used to validate the “Top 10 Things” that were determined by the experts. I believe that taken on their own, the opinions of experts and the data gathered from a tool would differ significantly. For instance, one of the Top 10 issues – by volume – detected by Tenon is for images that have alt and title attributes that are different. The reason we raise this issue is because there’s a likelihood that only one of these values is the actual text alternative for the image. Supplying both attributes – especially when they’re different from each other – leaves you with at least a 50/50 chance that the supplied alt is not an accurate alternative. After all, what could be the possible purpose of providing the differing title? Even though that’s a Top Ten issue by volume, it certainly isn’t going to make any Top Ten list created by experts. In the vast majority of cases this issue could be best characterized as an annoyance, especially because the information is (ostensibly) there in the DOM and can be discovered programmatically.
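
As a rough illustration of the kind of check involved (a simplified sketch, not Tenon’s actual implementation), something like this can be run in a browser console to flag images whose alt and title disagree:

```typescript
// Flag images that carry both alt and title attributes with different values.
// Simplified: a real tool would also normalize whitespace, ignore decorative images, etc.
const mismatched = Array.from(
  document.querySelectorAll<HTMLImageElement>("img[alt][title]"),
).filter((img) => {
  const alt = (img.getAttribute("alt") ?? "").trim();
  const title = (img.getAttribute("title") ?? "").trim();
  return alt.toLowerCase() !== title.toLowerCase(); // only one of them can be the real text alternative
});

mismatched.forEach((img) =>
  console.warn("alt/title mismatch:", img.src, { alt: img.alt, title: img.title }),
);
```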

Finally there’s the complete lack of understanding of statistics and sample sizes. If we assume that the purpose of Sitemorse’s testing of 25,000,000,000 pages is to gather statistically significant information on the accessibility of the web, they’ve overshot their sample size by a ridiculous amount. And again, by “ridiculous” I truly mean worthy-of-ridicule. The size of your sample should be large enough that you’ve observed enough of the total population that you can make reliable inferences from the data. A small sample means that you’ll be unable to make enough observations to compensate for the variations in the data. The sample size, when compared to the population size, allows you to calculate a Confidence Level and Confidence Interval. In layperson terms, the Confidence Level is how “certain” you can be that your results are accurate. The Confidence Interval is also what people refer to as the margin of error. For instance, if you have a margin of error of “2” then the variance in the actual result could be plus or minus “2”. If I said the average result of a survey is “10” with a Confidence Interval of “2” then the actual answer could be between “8” and “12”.

What kind of sample size do you need to make inferences on the accessibility of the entire web? You might think that number would be pretty massive. After all, the total number of web sites is over 1 Billion and growing literally by the second. How many distinct URLs are there on the web?

In August 2012, Amit Singhal, Senior Vice President at Google and responsible for the development of Google Search, disclosed that Google’s search engine found more than 30 trillion unique URLs on the Web… (Source)

Apart from the statement above, getting an authoritative and recent number on the total number of distinct URLs is really difficult. Fortunately it doesn’t really matter, because 30 Trillion unique URLs is, for our purposes, the same as Infinity. The required sample size isn’t a fixed proportion of the population size. After a certain point, you don’t really add much reliability to your inferences just because you’ve gathered a huge sample. Once you’ve gathered a sufficiently large sample, you could double it, triple it, or even quadruple it and not get any more reliable data. In fact, doing so is a waste of time and money with zero useful return.

What’s the right size for the necessary sample of pages? 16,641. In other words, it is the same for the entire web as it is for Sitemorse’s claim that they tested 25,000,000,000 pages. This is because, as I’ve said, there comes a point where continued testing is wholly unnecessary. Sitemorse claim to have tested 24,999,983,359 more pages than they needed to. A sample size of 16,641 has a 99% Confidence Level with a Confidence Interval of just 1. If you want a 99.999% Confidence Level you could bump the sample size to around 50,000, but I’m willing to bet the results wouldn’t be any different than if you’d stuck with 16,641.
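
For anyone wondering where 16,641 comes from: it falls out of the standard sample size formula for estimating a proportion, using z ≈ 2.58 for a 99% Confidence Level, worst-case variability (p = 0.5), and a margin of error of 1 point. A sketch follows; the finite population correction barely moves the number for populations this large.

```typescript
// n0 = (z^2 * p * (1 - p)) / e^2, then the finite population correction.
function sampleSize(population: number, z = 2.58, p = 0.5, marginOfError = 0.01): number {
  const n0 = (z * z * p * (1 - p)) / (marginOfError * marginOfError); // 16,641 at these defaults
  return Math.ceil(n0 / (1 + (n0 - 1) / population)); // correction is negligible for huge populations
}

console.log(sampleSize(25_000_000_000));     // 16641 for Sitemorse's claimed 25 billion pages
console.log(sampleSize(30_000_000_000_000)); // still 16641 for 30 trillion URLs
```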

In other words, Sitemorse didn’t just overshoot the necessary sample size. Even if we say we want a 99.999% Confidence Level, they overshot the necessary sample size by a factor of roughly five hundred thousand. That’s not being extra diligent, that’s being colossally stupid. They could have gotten the same data by investing 0.0002% as much work into this effort.

What does this mean?

I can’t speak to Sitemorse’s intent in making this claim that they’ve tested 25,000,000,000 web pages. I can only comment on its level of usefulness and logistical likelihood. On these counts Sitemorse’s claim is preposterous, extraordinarily unlikely, and foolish. The level of effort, the necessary resources, overall calendar time, and money necessary for the task is absurdly high. Even if their claim of testing 25,000,000,000 web pages is true, the act of doing so illustrates that they’re woefully inept at doing research and all too eager to waste their own time on fruitless endeavors.

Why do I care about this? What started me down this path was simple curiosity. Tenon has ideas for some research we’d like to take on as well. It was immediately obvious to me that Sitemorse’s claim of testing 25,000,000,000 pages was absurdly large, but I also immediately wondered how much time such an undertaking would require. I decided to write about it merely because of how absurd it is to test 25,000,000,000 pages.

The Sitemorse article is an obvious sales pitch. Any time someone says they have “special knowledge” but doesn’t tell you what that knowledge is, they’re using a well-known influencing technique. In this regard, Sitemorse isn’t any different from others in the market, and certainly not more worthy of negative judgement than others who do the same thing. The only difference in this case is that the huge number they used to try to establish credibility for their “special knowledge” actually harms their credibility rather than helping it. The reality is that there’s nothing special or secret out there.

An Actual Knowledge Share

Finally, I would like to close this post with a real knowledge share. While Sitemorse claims to hold special knowledge based on its research, I believe that the following information doesn’t actually hold any special surprises.

Top 10 issues, by Volume

  1. Element has insufficient contrast (Level AA)
  2. This table does not have any headers.
  3. This link has a `title` attribute that’s the same as the text inside the link.
  4. This image is missing an `alt` attribute.
  5. This `id` is being used more than once.
  6. Implicit table header
  7. This link has no text inside it.
  8. This link uses an invalid hypertext reference.
  9. This form element has no label.
  10. These tables are nested.

Issues by WCAG Level

WCAG Level Count Percent
Level A: 765278 52%
Level AA: 357141 24%
Level AAA: 339664 23%

Issues by WCAG Success Criteria

Success Criteria Num. Instances Percent
1.1.1 Non-text Content (Level A) 110582 7%
1.3.1 Info and Relationships (Level A) 195544 12%
1.3.2 Meaningful Sequence (Level A) 20613 1%
1.4.3 Contrast (Minimum) (Level AA) 352562 22%
1.4.5 Images of Text (Level AA) 255 0%
2.1.1 Keyboard (Level A) 204361 13%
2.1.2 No Keyboard Trap (Level A) 4901 0%
2.1.3 Keyboard (No Exception) (Level AAA) 185056 12%
2.3.1 Three Flashes or Below Threshold (Level A) 23 0%
2.3.2 Three Flashes (Level AAA) 23 0%
2.4.1 Bypass Blocks (Level A) 24286 2%
2.4.2 Page Titled (Level A) 1033 0%
2.4.3 Focus Order (Level A) 18776 1%
2.4.4 Link Purpose (In Context) (Level A) 139296 9%
2.4.6 Headings and Labels (Level AA) 4324 0%
2.4.9 Link Purpose (Link Only) (Level AAA) 139296 9%
2.4.10 Section Headings (Level AAA) 15289 1%
3.1.1 Language of Page (Level A) 4497 0%
3.3.2 Labels or Instructions (Level A) 24248 2%
4.1.1 Parsing (Level A) 56843 4%
4.1.2 Name, Role, Value (Level A) 103883 6%

Issues By Certainty

Certainty Num. Instances Percent
40% 2371 0%
60% 323036 29%
80% 40453 4%
100% 756559 67%

Issues By Priority

Priority Num. Instances Percent
42% 2363 0%
47% 435 0%
51% 1033 0%
54% 352562 31%
57% 920 0%
65% 56843 5%
76% 5145 0%
81% 15289 1%
85% 161 0%
86% 255 0%
90% 34531 3%
96% 81881 7%
100% 571001 51%

Conclusion

  1. Nobody holds any special secrets when it comes to knowing how to make stuff accessible. If you’re interested in learning about accessibility there are already excellent resources out there from The Web Accessibility Initiative, WebAIM, and The Paciello Group. Each of those organizations freely and openly share their knowledge.
  2. (Not necessarily specific to Sitemorse) Anyone who claims to have special knowledge or expects you to sign up for their special downloadable whitepaper is full of shit and should be treated as such.
  3. Every. Single. Piece. Of. Data. Above. indicates one thing: the nature and volume of automatically detectable accessibility issues make it obvious that people constantly create high-impact yet easy-to-fix accessibility problems. Ignorance is the #1 roadblock to a more accessible web.

What this experiment showed me is that, while we know for a fact that there’s only so much automated testing can find, plenty of people are making these common mistakes over and over. Further: you don’t need to run a tool against 25,000,000,000 pages to tell you this; all you have to do is listen to users with disabilities. Maybe Sitemorse should’ve started there.

Get the full data set used in this.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development. Email me directly at karl@karlgroves.com or call me at +1 443-875-7343.

The day I had an assault weapon in my car and was confronted by police

A couple of blog posts ago, I wrote about Gun Control and mentioned that I’ve owned a couple of SKS rifles. The SKS fires a 7.62x39mm round, the same as early AK-47s, and is also similar to the cartridge used by the M-1 Carbine. The SKS was a military rifle, used by the USSR and supplied to their communist allies in the 50s. It was replaced by the AK-47. While pedantic gun nuts will argue (rightfully) that the SKS isn’t an “Assault Rifle”, it is an assault weapon. The differences between the two terms don’t warrant much discussion here. The SKS is the weapon that Micah Johnson used to shoot 12 police officers in Dallas on July 7, 2016. It has an effective range of 400 yards.

I had one in my car one day in June 1992 during an encounter with the police.

One day in June 1992 I was driving down the road and saw some friends playing lacrosse in a field across from a church in our neighborhood. I pulled over next to them and chatted for a while. My friend Mike eventually came by in his 1971 Monte Carlo. In case you’re unfamiliar with the ’71 Monte Carlo, that’s the car that Ace Ventura drove.

Mike’s car looked (and ran) exactly like it, but over the previous winter we had done a fair amount of work to it so it would be faster. After a little bit of chit-chat, I told my friends I was on my way to the shooting range and left. As I got in my car, Mike got in his and followed behind me, revving his engine and acting like he wanted to race me. I decided to take the bait. I knew his huge tank of a car didn’t stand a chance against my Corvette. We drove through Linthicum, MD and eventually onto Rt. 170, which had a long straightaway along the BWI airport. We made a left turn onto BWI and both of us put the pedal to the floor. My Corvette pulled away easily and I let off the gas and let the car decelerate on its own.

If you look at the right-hand side of this map, there’s a 90-degree turn. The map makes the turn look tighter than it is, but it is a turn I’d taken in my Corvette at over 60mph before. This time I was going more like 50 mph, but as I got to the apex of this turn, a pedestrian jumped out in front of me. I panicked and jerked the wheel left to avoid him and then right to compensate as the rear end came loose over some gravel in the intersection. It didn’t work, and my Corvette skipped over the median like a rock skipped on a lake. I ended up landing on the wrong side of the road, going the wrong direction. In other words, I spun in a complete circle. I got out of my car, shaken and pissed off. I looked at my car and it was a mess. The entire front suspension was destroyed and the fiberglass front end was cracked up.

It wasn’t long until the police arrived. This was before cell phones, but the Transportation Authority cops patrol the area around the airport frequently. A female cop stopped and surveyed the situation. When she came over to me I explained what happened. She asked me for my license and registration. I gave her my license and then told her my registration was in the glove box. C-III Corvettes don’t have glove boxes in the dash, but rather behind the seats. In 1976, the rear window was vertical, and there’s a cubby hole that goes underneath the back of the car. The cubby hole is barely big enough to fit a large suitcase. It was just wide enough to fit my rifle in its case.

“I need to get my registration from inside the car”, I told her, “and I have a gun in my car”.

“OK you need to step away from the car”, she responded as she positioned herself between me and the car.

“It’s OK. It isn’t loaded. I was on my way to the range”, I explained.

She had me walk further from the car to create distance between me and the gun. She then called the State Police who have a barracks not far from there. A State Trooper came by, grabbed my rifle case from the car, took the rifle out and inspected everything. He then asked me where I was going, what I was doing, etc. Everything, of course, went fine. I had the rifle in a case, it was unloaded, and I had no ammunition in the car, either. As I said, I was on my way to the range.

The events of this week, with the police shootings of Alton Sterling and Philando Castile, got me thinking about this event so many years ago and how different it was for me. Philando Castile had a Concealed Carry Permit for his gun. In other words, he was legally allowed to carry a loaded handgun and the police shot him anyway.

In both cases, the car’s driver informed the police officer that they had a gun. Both drivers were responsible gun owners cooperating with the police. One of us lived. People can try to argue this isn’t about race all they want, but I shudder to think that maybe Philando Castile would be alive today if he was white.

The hard slog (an introspective humblebrag)

For some reason, this week was a pretty good week for Tenon. We got a large number of upgrades, including annual payments for some of our larger plans. We also got our fair share of purchases of our “Micro” plan, which doesn’t really make us much money, but every bit helps. This morning I had to log in to Stripe (our payment processor) to help a customer with an expired credit card. I quickly became astonished at the worldwide reach of Tenon, especially lately. We have paying customers across the US, Canada, France, Scotland, Ireland, England, Norway, New Zealand, Australia, India, Spain, and Japan. As SaaS companies go, we’re extremely small. As an accessibility company, we’re extremely small. Hell, as any type of company, we’re extremely small.

It is easy to feel like everything with Tenon is a hard slog. I am working, at some level, 7 days a week (usually mornings and evenings) on Tenon. Monday through Friday, I wake up in the morning and do an hour or so of catch-up on things like support requests that’ve come through from users in Australia and Asia. I work my normal work hours for TPG and when that’s done, my evenings are filled with family commitments interspersed with Tenon work. My laptop and mifi device come with me as I sit in the car programming while my daughter is at dance. When the family goes to bed, out comes the laptop for more work. But it isn’t just me. Developers and sales guys are at it as well. It isn’t a stretch to say that someone is working for Tenon 24/7/365.

It is a hard slog. Everything is bootstrapped. Every penny goes right back into the company, paying for development, paying for overhead, paying for long-overdue legal bills. I’ve personally made nothing from Tenon so far, but I still consider it a massive success. I’m humbled by the fact that our growth is almost entirely from word-of-mouth. I’m humbled by the fact that other people believe in us. The slog, however hard at times, is perhaps its own reward. To everyone who has ever said nice things about Tenon to someone else: Thank you.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development. Email me directly at karl@karlgroves.com or call me at +1 443-875-7343.

I have no sympathy for excuse-making and won’t apologize for that

Almost a year ago, Dale Cruse posted a list of must-follow people involved in accessibility. I was honored to be on that list. Dale described me as “militant about accessibility”. I began working on a follow-up post where I disagreed about the “militant” label. Ultimately I deleted the post largely because it turned into a ramble about being committed to quality over accessibility.

The other day, I posted something on Twitter.

It got a lot of traction and is probably one of my most popular tweets, with 109 “engagements” according to Twitter. It also got one sharply negative response, from Amanda Rush, who was inspired to write a blog post of her own. A series of 140-character-at-a-time posts on Twitter isn’t really a sufficient way to communicate ideas, so I’ll do it here.

I’ve described this a bit before, but in the very late 90s I created my first web pages. I was working in the music industry at the time and someone else had made a website for me, but getting them to do updates was sort of a pain so I decided to learn how to do so on my own. Over time I got more serious about web stuff and frequented a number of Usenet newsgroups like alt.www.webmaster and comp.infosystems.www.authoring.html. There, I forged friendships that remain to this day with people like Brian Huisman and William Tasso. I also interacted with other people like Mike Davies, David Dorward, Jukka Korpela, and Patrick Lauke – all of whom opened my eyes to accessibility. They often commented about web accessibility, often saying simply “How would this work for a person on a screen reader?”. They got me thinking about my sites in the context of the user. They soft-sold me into giving a shit about the person who has to interact with my work. I owe a lot to them for opening my eyes. The user matters.

Starting with my very first jobs as a professional web developer I’ve heard every excuse imaginable for why someone doesn’t do more about accessibility – up to and including outright hostility:

  • …but how many blind people actually visit our site?
  • …but aren’t most people with disabilities unemployed?
  • …if people with disabilities need some help, they can just ask a friend
  • People with disabilities aren’t in our target demographic

The list goes on. In the meantime, the accessibility community goes on the defensive, trying to construct spurious business case arguments for accessibility.

Enough is enough. Accessibility is a civil right, end of story. I dare everyone reading this to stand up in their workplace and replace “blind people” or “people with disabilities” in the above list with “black people” or “Jews” and blurt it out. Scared to sound like a racist jerk? Good.

This isn’t “appropriation”, as Amanda Rush claims, this is about illuminating the fundamental failure of judgment that drives this type of excuse-making. Accessibility is a civil right. I’m not trying to claim that ICT products and services need to be perfect. My argument is that these excuses are offensive and worthy of ridicule. Don’t agree with me? That’s fine. You should be aware, however, that the US Department of Justice, the Dept. of Education’s Office of Civil Rights, lawyers, judges, and lawmakers across the country agree with me, as proven by my List of web-accessibility related litigation and settlements. Every single one of these lawsuits was a prayer for relief based on a claim of violation of the plaintiff’s civil rights.

Accessibility is a Civil Rights issue, plain and simple. I have no patience for excuse-making around accessibility and I will not apologize for pointing out that it is a Civil Rights issue that exists on the same level as racial discrimination.

I’m available for accessibility consulting, audits, VPATs, training, and accessible web development. Email me directly at karl@karlgroves.com or call me at +1 443-875-7343.

Gun Control

I’m a gun owner. I purchased my first firearm 25 years ago, when I was 18 years old. My friend Jason was very enamored with guns. At the time, he had a .22 caliber Marlin rifle and took me to a local shooting range. I enjoyed shooting and was pretty good at it. Well, I was good enough at it that it encouraged me to do it more often. I liked doing something that required skill and that didn’t come easy but that wasn’t so hard it was discouraging. So a few months later when he invited me along to a gun show, I took the chance. I can’t remember whether it was that gun show or another, but before long I became the owner of an SKS. At the time, the market was flush with them and you could purchase one for around $100, still greased up with Cosmoline. The SKS was fun to shoot and far more powerful than the .22 Marlin. Since that time, I’ve owned several other firearms, including a 9mm pistol. I currently own only two firearms: the 9mm and a rifle.

Gun control in the United States is a major wedge issue and, like most wedge issues, I don’t really agree with either side. The fact that the issue is so divisive and feelings run so strong makes it particularly frustrating and difficult to have a calm, rational discussion about. My friends in other countries seem to think that gun violence is an easy issue to solve. “Just stop letting people have guns” is really easy to say but far less easy to do when the right to have guns in the first place is central to the founding of your country. The Bill of Rights is extremely important, both from a legal perspective and a historical perspective. These first ten amendments to our Constitution were written, in large part, to appease anti-Federalists. These were people who weren’t very crazy about the idea of a central government and had really strong feelings about how we came to fight for our independence in the first place. Even with a casual read of the Declaration of Independence and the Constitution, it is obvious that these were people who were pissed at England and weren’t having any more of that monarchy crap. The Bill of Rights was created specifically to spell that out. Each of the ten amendments in the Bill of Rights was written in reaction to the injustices the colonists suffered at the hands of monarchy. In fact, it is not an exaggeration to say that without the Bill of Rights, there would be no United States.

In the case of the Second Amendment, everyone tends to have their own interpretation of what it means and even about why we have it in the first place. I don’t think too many people disagree, however, that the core message of the Second Amendment is the right for people to defend themselves. The inability of colonists to defend themselves was a huge issue at the time and one that Ben Franklin writes about extensively in his autobiography. In it, he discusses the numerous times that he tried to get English troops to help defend colonists against Native Americans and/or allow the colonists to have guns so they could defend themselves. So while the obvious arguments that the Second Amendment is tied to the English Bill of Rights of 1689 hold true, the simple ability to defend oneself from any aggressor, be it a tyrant or not, was seen as important enough to the founding fathers to include it in the Bill of Rights. To many, the type of aggressor doesn’t matter, up to and including the Federal Government itself. In fact, some would say that defense against tyrants is also central to what the Second Amendment is all about. In other words, you can’t just say, “OK everyone turn in your guns. Gun shops and gun manufacturers, y’all gotta shut down now”, because the entire spirit of each citizen being able to defend themselves no matter what type of aggressor we face is central to who we are as a people.

I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors.
– Thomas Jefferson

The problem is that it is 225 years after the ratification of the Bill of Rights, and we don’t need to defend ourselves from aggressive Native Americans, tyrants, external aggressors, or even terrorists. The people we need to be made safe from are each other – specifically those who have guns. Last night, a lone gunman with an AR-15 and a handgun shot up a nightclub in Orlando, Florida and killed 50 people. We are only halfway through 2016 and there’ve been 113 mass shootings.

Every Day on Average (all ages)

Every day, 297 people in America are shot in murders, assaults, suicides & suicide attempts, unintentional shootings, and police intervention.

Every day, 89 people die from gun violence: 

  • 31 are murdered
  • 55 kill themselves
  • 2 are killed unintentionally
  • 1 is killed by police intervention
  • 1 intent unknown.

Every day, 208 people are shot and survive:

  • 151 shot in an assault
  • 10 survive a suicide attempt
  • 45 are shot unintentionally
  • 2 are shot in a police intervention

Key gun violence statistics

According to the CDC, the per-capita rate of death by firearm is about the same as the rate of death by motor vehicle accident. While many government agencies such as NIH, CDC, NIST, and NHTSA have funded extensive research into improving motor vehicle safety, conservative lawmakers actually fought to ban research into gun violence. This isn’t a false equivalence. If we’re to assume that it is every American’s right to bear arms, then we must ensure the safety of the public as well. If we can put into place sensible safety features and regulations around ownership, operation, and safety for motor vehicles, why is it that we can’t study the causes of gun-related injuries and put into place other sensible regulations?

As a gun owner, I am very serious about gun safety. It is my responsibility as a gun owner to ensure safe storage when not in use and safe handling during use. I think responsible gun owners will agree with this. But this is not enough. We are not doing enough to ensure public safety. We are not doing enough to keep guns out of the hands of bad guys and people who are emotionally unable to handle the responsibility of gun ownership. We are not doing enough to regulate ownership criteria and training of new owners. In fact, we are effectively doing nothing. In order to decrease gun violence in our country, that needs to change.

Is WCAG too long?

Yes.

And no.

But mostly it just feels that way.

I just got home from this year’s CSUN Conference and, as always, it was a wonderful time. Like many people, I find myself feeling very energized. The camaraderie at CSUN leaves you feeling like you have an army standing behind you as you venture forth to make the world more accessible. One thing about this year’s CSUN bothers me a bit, though, and it’s what led to this post.

On Friday, Billy Gregory and I presented Why WCAG? Whose Guideline is it, Anyway? with The Viking and The Lumberjack. This talk focused on some humorous criticisms we have of WCAG and how people – primarily those who are not experts – can be tripped up by WCAG. Like everything we do, Billy and I attempted to use humor to help clear up some of these points of confusion. The talk was standing-room only and was well-received by the audience. But that didn’t stop some people from objecting to what we had to say.

After Cordelia Dillon tweeted about this point from the talk, she was quickly “corrected” by David MacDonald, who clarified that WCAG was only 36 pages. Regardless of the accuracy of Cordelia’s tweet, this marked the second time that David MacDonald has chosen to comment on the substance of this talk despite not having been in attendance. Whether or not the “Normative” portion of WCAG is 36 pages, David lacked the context necessary to understand what was being said. More importantly, David’s knee-jerk reaction is even more ridiculous when you consider that the Viking & the Lumberjack are very definitely not the first people to make this observation, and David knows it. Public criticism of WCAG’s length has been around since 2006 – two years before WCAG reached final Recommendation status. I’m not about to get into the history and drama of the WCAG Working Group prior to 2.0’s release because it isn’t relevant to this discussion, but suffice it to say that this had been a topic of discussion prior to Joe Clark’s ranticle on ALA.

It isn’t (just) the length, it is the density

David is correct in saying that the normative information – the actual standard – of WCAG 2.0 is only 36 pages long. Regardless, people tend to lump the associated materials into what they collectively refer to as WCAG. In other words, it is an exercise in pedantry to correct people who claim WCAG is too long by pointing out that the actual standard is only 36 pages. What people refer to as WCAG also includes the informative portions: How to Meet WCAG 2.0 is 44 pages, Understanding WCAG 2.0 is 230 pages, and the Techniques and Failures for WCAG 2.0 is 780 pages. In full disclosure, this means the figure we cited in our presentation was far higher than the normative standard alone. What we wanted to convey, however, is that it feels immense.

The actual WCAG Recommendation needs to be written the way it is. Every word of a document like WCAG is important. Every word and phrase has a specific meaning. Because WCAG has been adopted as an ISO standard and because WCAG is (or will be) incorporated into a number of regulations throughout the world, the wording must be explicit and detailed regarding the requirements for conformance. But the informative content, such as that within “How to meet WCAG 2.0”, has no such requirement. Despite this, the How to meet… document has an overall grade level of 9.6 and the Understanding… document has an overall grade level of 10.7. Individual entries in the “Understanding…” documents hover around a grade level of 10. The document that discusses Understanding Techniques has a grade level of 12! Clearly, the writing style of the informative content does not help when it comes to people’s perception of the content’s length. Also, there’s ample opportunity for people to say, “WTF WCAG”:

Creating components using a technology that supports the accessibility API features of the platforms on which the user agents will be run to expose the names and roles, allow user-settable properties to be directly set, and provide notification of changes. (G10)

I know what this means. But I’ve been professionally involved in accessibility for more than a decade. You could say that I know enough about accessibility that I don’t need to read the above technique to know what it means to conform to SC 4.1.2. The above technique title is simply no easier for the layperson to parse than the Success Criterion itself! I admit, this is a particularly bad case. Some of the other technique titles are short and clear. But G10 has plenty of friends, like this one.
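
For context, grade level figures like the ones cited above typically come from a readability formula such as Flesch-Kincaid. A rough sketch is below; the syllable counter is deliberately crude, so treat its output as approximate.

```typescript
// Very rough syllable estimate: count groups of consecutive vowels in each word.
const countSyllables = (word: string): number =>
  Math.max(1, (word.toLowerCase().match(/[aeiouy]+/g) ?? []).length);

// Flesch-Kincaid grade level: 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) ?? []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, word) => sum + countSyllables(word), 0);
  return 0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
}

console.log(fleschKincaidGrade("Consult the glossary when a phrase is unclear.").toFixed(1));
```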

The Presentation Doesn’t Help

There’s really no nice way to say this. The WAI site and their deliverables are not attractive. They are mostly just walls of text that exacerbate the content’s readability problems. Recent changes, such as the redesigned Quick Reference, are huge steps forward, but the vast amount of pre-existing information presented in typical W3C wall-of-text standard-document format does not do this information any favors. The current list of WCAG WG members contains a number of participants with extensive UX experience. The WAI should leverage these resources to redesign the presentation of the informative materials so they are easier for the layperson to navigate and understand.

Do I think WCAG is too long?

Not really. I think the writing style and presentation of the associated materials makes WCAG feel too long. I think the WCAG Working Group should be commended for recent redesigns of some supporting materials, and they should continue these efforts. In addition, I think they should strongly consider adopting well-established plain language practices when authoring or revising their materials. This is especially true for the "How to meet…" and "Understanding…" documents, which are often sorely needed by those who are new to accessibility. These steps should help alleviate some of the many well-founded criticisms of the content’s length.


Conference talks are not sales pitches – a preemptive rant

It is less than a week before the Annual International Technology and Persons with Disabilities Conference, affectionately known as “CSUN” (as in, “see sun”) after the California State University, Northridge, whose Center on Disabilities hosts the conference. I’ve taken this week as vacation and my talk still isn’t prepared. This is probably the least prepared I’ve ever been for a talk, so I’d better get on it. But first, I want to rant.

My first public speaking engagement (in the tech space) was in 2004, at the 51st Annual Conference of the Society for Technical Communication. Over the last 12 years I’ve spoken in 7 countries and nearly 20 states. There’s a particular practice at conferences that I think is a bit deceptive: using a conference talk as a veiled sales pitch. It works like this: a compellingly-titled talk turns out to be a 40-minute spiel revolving entirely around a product made by the presenter’s company. The more closely the talk’s topic relates to something the product does, the more likely it is that the talk is merely a sales pitch.

This practice is common at industry conferences, even in non-technical industries. But at CSUN, it just feels more wrong. In full disclosure: I’ve done this myself, at Open Web Camp 2014. While I tried my best to avoid it, given the topic it was pretty much inevitable that the talk would focus on Tenon. Even so, I ended up feeling like I had tricked the audience into listening to me sell Tenon. If I ever give another talk that is all about Tenon, I will clearly disclose that in the title or description.

My challenge to speakers at CSUN 2016

End the veiled sales pitches. End the deception. People spend $555 on the conference and around $300 a night on the hotel. Many fly in from all over the world to attend, and some pay for it out of their own pockets. CSUN attendees deserve better than to be treated like a room full of sales prospects. They are there to learn and to exchange ideas, not to be pitched to. You can save that for social events and impromptu conversations in the hallways.

Putting my money where my mouth is

I have two talks: one is a co-presentation with Billy Gregory, and the other is a solo talk titled “Extreme Accessibility”. I will not mention Tenon in either talk. I will not even wear a Tenon t-shirt while presenting. If I break this promise, I’ll buy everyone in the room a beer at Redfields bar. Want to talk to me about Tenon? Awesome. Catch me in the hallway, at a social event, or email me ahead of time to set something up.

For the attendees

Find yourself in a session that’s really just a sales pitch? Walk out. The general session schedule is chock full of great talks. Find one where the speaker respects you enough not to spend the whole time doing a live infomercial.


The Accessibility & SEO Myth

No, Accessibility doesn’t lead to better SEO. More importantly, this isn’t a good business case argument. It is time to put this one to rest.

To formulate a good business case argument, you must be able to show that taking action ‘X’ will have consequence ‘Y’. In this case, the argument is that improving accessibility will improve SEO. The implication that follows is that this will somehow make the organization more money or otherwise help the organization reach its defined goals, where more visitors means a greater likelihood of achieving those goals. This is only true if you weigh a handful of accessible development techniques with inordinately high levels of importance.

The Web Content Accessibility Guidelines contain:

  • 4 Principles, which are
    • split into 12 Guidelines, which are then
      • split into 61 Success Criteria

The informative supplemental material for WCAG defines approximately 400 Techniques and Failures. At the time of this writing there are 93 Common Failures for WCAG. I’m of the opinion that saying “Accessibility Improves SEO” is greatly over-selling accessibility.

A Google search for "Search engine ranking factors" displays a number of results featuring leaders in the SEO/SEM industry who outline the many factors that improve search rankings. The vast majority of the identified ranking factors have no relationship of any kind with Accessibility. In fact, even many of the “on-page factors” don’t have much relationship with Accessibility.

Accessibility and SEO intersect in the following places:

  1. Page titles
  2. Headings
  3. Alt attributes
  4. Link text

In the entire list of roughly 400 WCAG Techniques and Failures, only 21 relate to the items above. In other words, about 5% of WCAG techniques are correlated with SEO. None of this means those 21 techniques aren’t important; they definitely are. Titles, headings, and link text are important navigation and wayfinding aids for users. But that’s not the same as claiming that better accessibility results in better SEO.
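
To put that overlap in perspective, here is a rough, hypothetical sketch in TypeScript of a check that covers only those four intersection points. It is not Tenon and it is not a WCAG technique; the function and type names are mine, invented purely for illustration.

    // A hypothetical check covering the only places where accessibility
    // and SEO genuinely overlap: page titles, headings, alt attributes,
    // and link text. Illustrative only.
    interface Finding {
      item: 'page title' | 'heading' | 'alt attribute' | 'link text';
      message: string;
    }

    function checkSeoAccessibilityOverlap(doc: Document): Finding[] {
      const findings: Finding[] = [];

      // 1. Page titles
      if (!doc.title.trim()) {
        findings.push({ item: 'page title', message: 'Document has no title.' });
      }

      // 2. Headings
      doc.querySelectorAll('h1, h2, h3, h4, h5, h6').forEach((heading) => {
        if (!heading.textContent?.trim()) {
          findings.push({ item: 'heading', message: 'Heading element is empty.' });
        }
      });

      // 3. Alt attributes
      doc.querySelectorAll('img').forEach((img) => {
        if (!img.hasAttribute('alt')) {
          findings.push({ item: 'alt attribute', message: 'Image has no alt attribute.' });
        }
      });

      // 4. Link text
      doc.querySelectorAll('a[href]').forEach((link) => {
        const text = link.textContent?.trim().toLowerCase() ?? '';
        if (!text || text === 'click here' || text === 'read more') {
          findings.push({ item: 'link text', message: 'Link text is empty or non-descriptive.' });
        }
      });

      return findings;
    }

    // Usage (in a browser console): console.table(checkSeoAccessibilityOverlap(document));

Everything else that goes into search ranking, and everything else that goes into WCAG conformance, falls outside this small slice.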

“Better SEO” is not an accessibility business case and this myth needs to go away.


The incredible ugliness of political bias and our abandonment of logic & reason

Wise people (namely, Jennifer Groves) often say that you shouldn’t discuss politics or religion in a professional environment. Since most of the stuff I post about on this blog is work-related, I suppose this post is a little unwise. Those who know me on Facebook know that I post a fair amount of political stuff there and would rightly assume I’m pretty progressive, so I’ll try to leave my own biases out of this post.

A few weeks ago Maryland Senate President Mike Miller sent out a survey via email. I’m not sure how I got on his mailing list, especially since he does not represent my district. Still, I think it was awesome. The survey asked respondents to weigh in on a number of things that will be coming up in this year’s legislative session. I have no idea what will be done with the survey information, but I think it is a great idea for legislators to reach out to their constituents in this way. Why don’t more people do this?

I decided I wanted to share the link on Facebook. During the 2012 presidential election cycle there were a few places where discussions of Maryland politics took place, but I could not find those same groups anymore, and the ones I could find were ghost towns. I did find one, however, aptly called “Maryland Politics”. Topics there appeared to be balanced in nature, though one very active participant tended to editorialize quite a bit when posting news items. Still, it didn’t seem too bad, at first. Since the general atmosphere tended to lean toward my own political tastes, I stuck around. Over time, though, it became more and more obvious that the group functioned more as its creator’s own biased sounding board than as a venue for actual discussion.

Naive, I know. Politics in this country is ugly, populated by candidates who run around chasing opinion poll after opinion poll, bloviating about whatever the hot topic of the day happens to be and issuing reductionist statements that fit into soundbites of just the right size and simplicity to be regurgitated by talking heads on the news channels frequented by their target demographic. This idea was parodied with laser-sharp accuracy in a Family Guy episode titled It Takes a Village Idiot, and I Married One:

Though Lois is clearly more intelligent than her opponent, her campaign falters as Mayor West proves more politically savvy than she is. While Lois bores voters with detailed plans to improve the city, Mayor West garners support simply by avoiding answering questions and acting in a patronizing manner. Brian, observing that “undecided voters are the biggest idiots in the country,” advises Lois to dumb down her campaign. She soon discovers that she can generate support merely by dropping controversial terms such as “Jesus” and “terrorists” in meaningless ways, and by answering questions about her policy plans only by saying “9/11.” She wins the election, and continues to use fear tactics to raise funds for cleaning up the lake.

It seems like a chicken & egg scenario. Are the voters idiots? Are politicians just playing to our baser instincts? As I grow older I feel like we’re careening toward a world where Idiocracy is more like a documented prediction than a comedy movie. I think we should expect more out of our politicians, our peers, and ourselves.

Everyone has their biases, whether they admit to them or not. I have my own. But believing in and spreading baseless, deceptive, or wholly untrue allegations is unethical, in my opinion. Each voter deserves the right to understand and assess each candidate and each political issue based on actual facts, logic, and data: voting records, position statements, recorded statements in the media and on the debate floor. Refusing to consider, discuss, share, or even acknowledge factual information is deception. Ultimately, each person needs to determine which candidate best fits their own preferences for what makes a candidate the right choice. Thankfully, we live in a time when researching this information has never been easier. The for-profit news media is not a reliable source of such information. While my fellow liberals are quick to dismiss Fox News as biased, the other media outlets are just as biased; even CNN and MSNBC have been accused of conducting a media blackout of Bernie Sanders.

You can and should avail yourself of resources that are better than the news media

(Mostly) unbiased resources exist, chiefly in the form of non-profit organizations that identify themselves as “watchdogs” or “think tanks”. Some of these can still be biased, depending on who founded them and who funds them. Here are a few things to look out for:

  • If they’re focused on one political cause, their positions and coverage will obviously have a myopic focus on that cause. They will support those who support them and vilify those who do not.
  • If they’re founded by a single person or group of people who lean toward a single political ideology, so will the organization as a whole. Is the board full of ex-staffers from the Clinton administration? That organization will lean to the left. The converse would be true if they’re all from the former Bush administration.
  • If their position papers and blog posts would fit in well with a clearly-biased mainstream media source like Fox or Daily Kos, then they’re biased.

Resources I’d Recommend

Of the above, the Sunlight Foundation is particularly important based on the tools they provide for accessing important data.

Obviously making use of some of these resources involves a little more time and energy than flipping on your favorite news channel. You owe it to yourself to take the time to at least peruse a few of the fact-checking websites to verify whether what you’ve heard in the media is true or not. Come election day, vote based on facts, not rhetoric.