Karl Groves

Tech Accessibility Consultant
  • Web
  • Mobile
  • Software
  • Hardware
  • Policy
+1 443.875.7343

The form field validation trick they don’t want you to know

Yes, that was a purposefully click-bait headline.

One of the most frustrating things for users is unclear or unintuitive form constraints. My personal pet peeve is phone number, credit card, or SSN/EIN fields which ask for numeric-only entry. While it may very well be necessary that your field use only numeric data, you don’t have to offload that requirement to the user. If, for instance, your field collects a North American telephone number, you know that a valid telephone number consists of 10 numeric characters. Instead of offloading the numerics-only constraint to the user, you can easily and simply strip the non-numeric characters yourself before validating the string length. This seems far more intelligent and certainly more user-friendly.

Here’s how

Because so many things require string manipulation, most, if not all, programming languages have some mechanism for finding, substituting, or removing substrings, often through the use of regular expressions. Here are some examples, shamelessly stolen from Code Codex:


C:

#include <string.h>

size_t i, j = 0;
for (i = 0; i != strlen(buff); i++)
    if (buff[i] >= '0' && buff[i] <= '9')
        buff_02[j++] = buff[i];
buff_02[j] = '\0';

Haskell:

import qualified Data.Char as Char

removeNonNumbers :: String -> String
removeNonNumbers = filter Char.isDigit

Java:

static String numbersOnly(String s) {
    return s.replaceAll("\\D", "");
}

JavaScript:

s.replace(/\D/g, "");

Perl:

$s =~ s{\D}{}g; # remove all non-digits

PHP:

preg_replace('/\D/', '', $string);

Python:

import re
re.sub(r"\D", "", s)

Ruby:

s.gsub(/\D/, "")
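
Putting the trick together in JavaScript: strip first, then validate the length of what’s left. This is a sketch of the idea only; the function name and the allowance for a leading country code “1” are my own illustration, not from any particular library.

```javascript
// Illustrative sketch: normalize user input, then validate the length.
// The function name and the 10-digit NANP rule are assumptions for this example.
function isValidNANPNumber(input) {
  var digits = String(input).replace(/\D/g, ""); // strip non-digits first
  if (digits.length === 11 && digits.charAt(0) === "1") {
    digits = digits.slice(1); // tolerate a leading country code
  }
  return digits.length === 10; // a North American number is exactly 10 digits
}
```

The user can type “(443) 875-7343”, “443.875.7343”, or “4438757343” and all of them validate, because the constraint is handled for them instead of by them.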
I'm available for accessibility consulting, audits, VPATs, training, and accessible web development. Email me directly at karl@karlgroves.com or call me at +1 443-875-7343

(re) Announcing A11yBuzz.com

On July 30, 2011 I posted My Challenge to the Accessibility Community: We Need an Accessibility Body of Knowledge in which I lamented:

The fact that there is no single source to get good, clear, peer-reviewed information on this topic is, in my opinion, a very huge barrier which prevents “outsiders” from participating in accessible development.

The post was received favorably by some in the accessibility community who agreed. Others felt differently, asserting that there’s already a lot of information sharing happening on the web as a whole. Private discussions took place among those who were interested in this idea but ultimately no direct action was taken to actually put together a single, cohesive body of knowledge.

In response, I created A11yBuzz as a way to crowdsource a single resource for accessibility information on the web. A11yBuzz was released for public use on January 01, 2012. But it was never really finished. I wanted to add features that allowed for entries to be voted on and reviewed. At CSUN 2013, Mike Guill and I presented a redesigned version of A11yBuzz which had these features. Unfortunately, that version was never finished, either. Mike had a series of relocations and a new adoption. I had shifted all of my energies to Tenon, and A11yBuzz just withered away.

This August, Joe Dolson and I talked about re-launching A11yBuzz as a complete refactor based on WordPress. Joe completely ran with it, using Mike Guill’s redesign and my Day One Theme as a starter, along with some of his own WordPress plugins and custom programming. Joe has done an amazing job.

The rest is up to you, my accessibility friends! For the site to be successful, it needs contributions. At this point, the site has a pretty good number of resources on accessibility, but it could be better. To achieve that goal, we need more contributions. If you want to contribute, go to A11yBuzz.com and register to help!


One. Simple. Question. (and a follow-up)

Several weeks ago, Bryan Garaventa made a post to the WAI-IG mailing list. The email thread went somewhat sideways, because some list members didn’t “get it” but it died down quickly enough. AccessIQ reignited the issue, wondering “…do web accessibility professionals have a sense of humour?” My response? Clearly the answer is NO. Even when a blind guy (Bryan) tries to make a point through humor, people in the accessibility community go on a ragefest about people “making light of accessibility”.

Instead of productive, collaborative discussion about bringing accessibility into the mainstream, accessibility people are too busy fighting with each other and using social media as a sounding board to name and shame everyone whose products aren’t perfectly accessible. I’ve said it before, we need to put the pitchforks down. We need to understand “perfection” isn’t possible and work on making “better” happen instead. For this, I propose we begin focusing on two very simple questions:

Do you agree that it is acceptable to prevent certain classes of users from using your ICT product or service?

This requires only a one-word answer: “Yes” or “No”. I’ve asked people this question before and I often get answers other than Yes or No. People will say “But that depends on [any number of red herring conditions]” and I always try to redirect to the original question. To move the conversation forward, we need to know whether the other person thinks it’s OK to discriminate. Hint: Nobody thinks that is OK. Or, at least, they won’t admit it in public.

Follow-up: What can you do now to ensure that access for all people is improved?

From there, we can assume that the other party is prepared to move forward with accessibility. We don’t need to continue rambling on about the various reasons why accessibility is good. We’ve gone past that and now it’s time to act. But it isn’t reasonable to expect perfection immediately. It also isn’t reasonable to expect that the necessary resources and knowledge will just magically appear out of nowhere. So the follow-up is: given your current knowledge and resources, what action can be taken immediately that will deliver a demonstrable positive result for users? Incremental betterment is far better than impatient expectations of perfection. As we improve what we do and how we do it, we make things better.

While I’ve previously spent a lot of time writing about selling accessibility, I really think the most effective approach is to limit the “selling” to one question. We don’t need to sit there and spin our wheels with red-herring distractions like ROI. Is it right to discriminate or not? No? Awesome. Then what are we going to do now to make sure we don’t discriminate? Do that.

Stop selling. Start leading.


Longdesc – Where are the alternatives?

Non-text Content is “any content that is not a sequence of characters that can be programmatically determined or where the sequence is not expressing something in human language”. Mostly what comes to mind when discussing non-text content is audio/video content, images, or other graphical content not necessarily image-based. WCAG 1.1.1 calls for alternatives for non-text content. For basic images, presented in the <img> element, the ‘alt’ attribute is the most frequent means of providing an alternative. The content you place in the ‘alt’ attribute may vary depending on the image and context, but “…the general consensus is that if the text alternative is longer than 75-100 characters (1 to 2 sentences), it should not be considered a short text alternative and should not be presented using the alt attribute or the figcaption element…”. In the vast majority of cases, that amount of text should cover you rather well in providing a good, clear, and concise alternative for the image. But what if the image is complex? What if the information portrayed in the image can’t be described effectively in 75-100 characters? One suggestion is to use the longdesc attribute.
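
In markup, the pattern looks like this (the file names here are illustrative): a short alternative goes in alt, and longdesc points at a separate document containing the full prose description.

```html
<!-- Short alternative in alt; full description lives in its own document -->
<img src="engine-cutaway.png"
     alt="Cutaway diagram of a 4-stroke engine"
     longdesc="engine-cutaway-description.html">
```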

Historically, support for longdesc has been rather spotty. Back in 2000, WCAG 1.0 recommended using longdesc, but also acknowledged the lack of browser support and also recommended the use of what was called a D link. In practice, the D link probably saw more popularity than longdesc and its recommendation was pretty pervasive.

Over time longdesc support among user agents has improved, having been added to Opera, IE, and Firefox. Movement toward supporting longdesc has been made by Chrome’s dev team. Screen readers such as JAWS, Window-Eyes, NVDA, and Orca support it, as do many authoring tools. That hasn’t stopped the pushback on longdesc and Apple has stated they have no plans to implement longdesc.

longdesc (as implemented) is a poor solution to a real problem

There should be no argument in anyone’s mind that there’s a real issue that needs to be addressed: effective and informative alternatives for complex and/or detailed non-text content. There are loads and loads of images on the web which convey things like charts, graphs, and diagrams. How do you describe, in 75-100 characters, the components and operation of a 4-stroke engine?

Example complex image: cutaway of 4-stroke engine

Easy. You can’t. There may be other ways to provide the description, such as in the same page as the image. Try getting that one past the content people. But longdesc – in its current form – is a crappy way to do this. See, the problem with longdesc is that it is basically only useful for screen reader users. Longdesc essentially locks out sighted users entirely. The image with longdesc isn’t placed in the tab order and there’s no visual affordance provided to indicate the existence of longdesc. Firefox’s implementation provides access to the long description via the context menu, which is great if you already know the image has a long description, something you likely won’t know if you’re not a screen reader user. As it stands, longdesc is wholly useless to people with cognitive disorders, which is another population that could seriously benefit from long descriptions.

Where are the alternatives?

Ultimately, I have to agree with many of the criticisms of longdesc. But that doesn’t mean I agree with the notion of just doing nothing, either. The fact remains that some images require longer descriptions than the 75-100 characters available to the alt attribute and despite the protestations of longdesc’s detractors, there don’t appear to be any proposed alternatives for implementing a mechanism of supplying long descriptions for non-text content, beyond saying “Fuck it, leave it to the web authors to figure that out”.

Two ideas

Unfortunately, that’s where we are right now if we want a viable means of supplying long descriptions. With Safari/VO support out of the picture, we can’t rely on partially supported features. Or can we?

See, here’s the thing about HTML: you can actually put whatever you want in your markup. You can make up your own elements or attributes; you can even add bogus values in attributes. That doesn’t mean it’ll do anything, but you can put it there. For instance, you can add the old <blink> tag to your page, but it won’t actually blink anymore in any major browser. Similarly, you can still add longdesc to your images. The attribute will still be in the attributes node of the image object. Because it is in the DOM, you, the developer, can do something useful with it. Here are two possible ideas:
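
For instance, because the attribute is right there in the DOM, a handful of lines of JavaScript can surface it to everyone. This is only a sketch of the idea, with hypothetical function and class names of my own; it is not Dirk’s plugin or my component:

```javascript
// Sketch only: function and class names here are hypothetical.
// Build a visible link pointing at an image's long description.
function buildLongdescLink(url, doc) {
  var a = doc.createElement("a");
  a.href = url;
  a.className = "longdesc-link"; // hypothetical class for styling
  a.textContent = "Long description";
  return a;
}

// Find every image carrying longdesc and append a visible link after it,
// so sighted users get the same access screen reader users do.
function exposeLongdescs(doc) {
  var imgs = doc.querySelectorAll("img[longdesc]");
  Array.prototype.forEach.call(imgs, function (img) {
    var link = buildLongdescLink(img.getAttribute("longdesc"), doc);
    img.parentNode.insertBefore(link, img.nextSibling);
  });
}
```

Calling exposeLongdescs(document) on page load would give every longdesc image a visible, keyboard-reachable link, which is exactly the affordance the native implementation lacks.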

Dirk Ginader’s longdesc plugin places an unobtrusive icon over the image which represents a link to the long description. Activating the link replaces the image with the long description. Dirk hasn’t done much continued development on the plugin, but it’s a great starting point and I like the concept.

Today, I created a new Polymer-based Web Component called image-longdesc. It is basically just a different approach to my image-caption component. It places a link to the long description under the image. Remember my 4-stroke engine example? Here it is with a caption and longdesc link:

Screenshot: Prior engine example as a web component with caption and longdesc link under the image

Are these ideas perfect? I don’t know. What I do know is that we’ve yet to see longdesc’s detractors come up with any viable alternatives that address the very real need for suitable alternatives to complex non-text content.


Feature misuse !== feature uselessness

Ugh. Longdesc. For those who don’t follow such things, the fight over the longdesc attribute in HTML5 goes back to (at least) 2008. Back then, the WHATWG was also considering eliminating the alt attribute, the summary attribute, and table headers. Ian Hickson’s blatant and laughable egotism led him to believe he knew more about accessibility than the many actual accessibility experts he was arguing with. In this context, it is no wonder that a lot of people have gotten to the point of just being sick of the topic of longdesc, instead preferring to concentrate on more impactful concerns in accessibility.

While I agree with a lot of the arguments made in Edward O’Connor’s Formal Objection to advancing the HTML Image Description document along the REC track, I do feel strongly compelled to address the use of the tired argument that I can summarize as “Because web developers misunderstand or misuse a feature, the feature must be bad”. In fact, I first responded to this type of argument 6 years ago on the HTML5 mailing list, in which I stated:

The notion that the decision to keep or eliminate an attribute based on whether it gets misused by authors is amazingly illogical. I would challenge the author to eliminate every element and attribute which is “widely misused” by authors.

For nearly a dozen years now, I’ve been employed in a capacity which gives me a day-to-day glimpse of how professional web developers are using markup. I see HTML abuse on a daily basis. Bad HTML permeates the web due to ignorant developers and is exacerbated by shitty UI frameworks and terrible “tutorials” by popular bloggers. In my years as an accessibility consultant I’ve reviewed work on Fortune 100 websites and many of the Alexa top 1000. I’ve reviewed web-based applications of the largest software companies in the world. The abuse of markup is ubiquitous.

  • I’m working with a client right now who has over 1600 issues logged in their issue tracking system just related to accessibility. Several dozen of those issues relate to missing ‘name’ attributes on radio buttons.
  • Across 800,000 tested URLs, Tenon.io has logged an average of 42 accessibility issues per page. That is a sample far too large to dismiss.
  • The average audit report by The Paciello Group is 74 pages long. I recently finished a report that was over 37,000 words long.

Regardless of your position on longdesc, citing developer misuse is little more than a red herring.


Video: Prioritizing Remediation of Accessibility Issues (from ID24)

The Paciello Group has recently uploaded all of the sessions from the Inclusive Design 24 event that was held on Global Accessibility Awareness Day. My session was titled “Prioritizing Remediation of Accessibility Issues” as described:

Once you have a report from an accessibility consultant, automated tool, or your QA team, now what? Not all issues are created equal. This session will discuss the various factors which must be weighed in order to make the most effective use of developer time and effort while also having the best possible results for your users.

Watch my video below, including repeated cameos by my mastiff, Poppy, and take a look at the whole playlist.


Announcing the Viking & the Lumberjack

At CSUN 2014, Billy Gregory and I gave a presentation titled No Beard Required: Mobile Testing With the Viking & the Lumberjack. The presentation was an absolute disaster. Our approach to the presentation was to “wing it”, showing how to test with various mobile technologies. Thing is, none of the mobile technologies actually cooperated with us. The good news – for us at least – is that Billy and I were entertaining enough for Mike Paciello to have a crazy idea of his own: a web video series called, appropriately, the Viking and the Lumberjack. Today we launch the first of (hopefully) many episodes in which Billy Gregory and I both entertain and inform. We hope you enjoy!


Video of my talk from Open Webcamp 2014


[Part 1] The Newb’s Crash Course in Test Driven Development, including Git, Grunt, Bower, and QUnit

Since we launched the private beta of Tenon.io, the feedback has been really positive and, frankly, energizing. But we have more work to do before we’re ready to open the whole thing up for the public. Much of that work centers around tests. We need more tests. Right now, we have a backlog of about 65 tests to write. Some of those tests require additional utility methods so we can keep things DRY. As I was writing one such method, I thought it might be a good topic for an intro to what I call modern web development techniques. I covered this in my recent presentation at Open Webcamp, titled The new hotness: How we use Node, Phantom, Grunt, Bower, Chai, Wercker, and more to build and deploy the next generation of accessibility testing tools. (an obnoxiously long title, I know).

In this tutorial I’m going to go over the basics of starting a project, scaffolding out a project, and give a very quick intro to Test Driven Development. There are a ton of details, nuances, and considerations that I’m going to mostly gloss over, because this tutorial touches on a lot of things and each of them are worthy of multiple blog posts in and of themselves. There are a ton of links throughout the content of this post where you can find a lot more info on these various topics and I really encourage you to explore them.

The general principle of Test Driven Development is this: if you know your requirements, you know the acceptance criteria. Write tests (first) which check whether you’ve met the acceptance criteria. Then write the code that passes the tests. This approach has multiple benefits, especially when it comes to quality. If you write good tests and you’re passing those tests, then you’re avoiding bugs. Also, as new code is added, if the new code passes its own tests but causes prior tests to fail, then you avoid the new bugs as well. This assumes, of course, that you’re writing good tests. At Tenon, we’ve seen our own bugs arise from tests that didn’t take into consideration some edge case scenarios. In my opinion, this demonstrates the best part of TDD: all we needed to do was add a new test fixture that matched the failing case, modify the code, check that we passed the test, and the bug was squashed.
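
To make that cycle concrete before we get to QUnit proper in Part 2, here it is with plain JavaScript assertions. The function is a toy of my own invention for illustration; the shape of the cycle is the point.

```javascript
// Step 1: the test, written before any implementation exists.
// Run it now and it throws, because stripNonDigits doesn't do anything yet:
// that's the "red" step.
function testStripNonDigits() {
  if (stripNonDigits("1-800-555-0100") !== "18005550100") throw new Error("digits kept");
  if (stripNonDigits("") !== "") throw new Error("empty string handled");
}

// Step 2: the smallest implementation that satisfies the test:
// the "green" step.
function stripNonDigits(s) {
  return String(s).replace(/\D/g, "");
}

testStripNonDigits(); // passes now; any regression here fails the build
```

When a bug surfaces later, you add a fixture reproducing it to the test first, watch the test fail, then fix the code, exactly the Tenon experience described above.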

Some background preparation

In this tutorial I’m only making a really tiny jQuery plugin, but we’re going to pretend it is an actual project.
Every single project I embark on has a local development environment and its own repository for version control. Over many years, I’ve learned the hard way that I can’t have a single dev server for everything I do, and that version control is critical. This is because chances are pretty high that I’ll eventually need to re-use, refactor, or expand on something, even if I consider it purely experimental at the time.

So, the first step for me is always to create the project and set up the version control. I use Git for version control and I use Bitbucket to host the repositories. I type these items in Terminal to get everything started:

mkdir /path/to/your/project
cd /path/to/your/project
git init
git remote add origin git@bitbucket.org:karlgroves/jquery-area.git

So, for the newbs: I’ve made the folder to hold the project using mkdir, I went to it using cd, I initialized the repository using git init and then I added the remote location using git remote add origin. The next step I often take is to set up the new host in MAMP but in this case I don’t need to since it is just a small jQuery plugin being written.

Every bit of code discussed in this tutorial can be found on Bitbucket at https://bitbucket.org/karlgroves/jquery-area. To download & use that code to follow along, do this:

git clone git@bitbucket.org:karlgroves/jquery-area.git /path/to/your/project
cd /path/to/your/project

Every feature must be driven by a need

I’m a very strong proponent of Agile software development processes and a very strong believer in requirements driven by a user-oriented need, often referred to as a User Story. Good user stories follow the INVEST pattern. Once a User Story has been defined, it is broken down into the distinct tasks that need to be performed to complete the story. For most user stories, there are likely to be multiple tasks. For this tutorial our user story is simple:

As a test developer, I want to be able to create tests which check for an actionable object’s dimensions.

Given the above, we then need to determine what tasks must be performed in order to complete the story. Since we’re testing for an actionable object’s final dimensions – and because we use jQuery – we want to test the values returned for .innerHeight() and .innerWidth(). This is because border and margin aren’t part of the “hit area” for actionable items. We also want to determine the overall area of the object. So our task in this case is pretty simple:

Create a jQuery plugin that will calculate an object’s dimensions

We determined this to be a story with a single task because that’s all it requires. But we also determined that, down the road, we may need more than just actionable objects, so we’ll let it be used for any object. In reality this plugin will only work for HTML elements that can take up space. Some elements, like <br>, don’t take up any space, but we won’t be using this for them.
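
As a preview of where Part 2 ends up, the plugin body itself can be this small. This is a sketch under my own naming; the stub on the first line only stands in for jQuery so the snippet works outside a browser, and the real file in the repo may differ.

```javascript
// Stand-in for jQuery so this sketch is self-contained outside a browser.
var $ = (typeof jQuery !== "undefined") ? jQuery : { fn: {} };

// The hit area excludes border and margin, so inner dimensions are what we want.
$.fn.area = function () {
  return this.innerWidth() * this.innerHeight();
};
```

Usage would look like $('#submit').area(), returning the actionable object’s inner width times inner height in square pixels, which is exactly the value our tests need to check against.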

Set up the project

In reality, this sort of simple plugin doesn’t require its own project, but go with me here.

The first step, after creating a local folder and setting up the Git repository, is to “scaffold” out the project, or get the basic structure in place. One of the best ways to do this is to use Yeoman. Depending on the nature of your project, there may already be an official Yeoman Generator for your type of project. In fact, Sindre Sorhus has already created one for jQuery plugins. No matter your approach, it makes sense to start out with a basic structure for your project.

I didn’t use the Yeoman generator for this project, mostly because I have my own set of preferences. The best approach, if I planned on making a habit of making jQuery plugins, would be to fork the Yeoman Generator and use it as a basis for my own. Either way, here’s how my structure winds up:

  • Folders
    • src – this is where the source file(s) go. For instance, in a big project involving multiple files, there may be several files which get concatenated or compiled (or both) later
    • dist – this is the final location of the files to be used later. For instance, a project like Bootstrap may have several files in ‘src’ which get concatenated and minified for distribution here in the ‘dist’ folder
    • test – this is where the unit test files go
  • Files in the project root. This holds many of the project related files such as configuration files, etc. Many of these files allow other developers involved in the project to work efficiently by setting up shared settings at the project level.
    • .bowerrc – this is a JSON formatted configuration file. There are a lot more interesting things you can do with this file, but all we’re going to do is tell it where our bower components are located.
    • .editorconfig – this is a file to be shared with other developers to share configuration settings for IDEs for things like linefeed styles, indentation styles, etc. This (helps) avoid silly arguments over things like tabs vs. spaces, character encodings, etc.
    • .gitattributes – this is another file allowing you to do some project-level configuration
    • .gitignore – this lets you establish some files to be ignored by git. You can even find tons of example .gitignore files
    • .jshintrc – One of the Grunt tasks we’ll be talking about is JSHint : “… a community-driven tool to detect errors and potential problems in JavaScript code and to enforce your team’s coding conventions.” The options for the JSHint task can either be put directly into your Gruntfile or into an external file like this one.
    • jscs.json – this is a configuration for a coding style enforcement tool called JSCS.
    • CONTRIBUTING.md – common convention for open source projects is to add this file to inform possible contributors how they can help and what they need to know.
    • LICENSE – another convention is to provide a file as part of the repository which explains the appropriate license type for the project.
    • README.md – finally, in terms of convention, is the README file which provides an overview of the project. The README file often includes a description of what the project is all about and how to use it.
    • jquery manifest file (area.jquery.json) – If you plan on publishing a jQuery plugin, you need to create a JSON-formatted package manifest file to describe your plugin
    • package.json – This JSON-formatted file allows you to describe your project according to the CommonJS package format and describes things like your dependencies and other descriptive information about the project
    • Gruntfile.js – This file allows you to define and configure the specific tasks you’ll be running via Grunt.

Task Automation via Grunt

As described above, we’re going to be using Grunt to automate tasks. To use Grunt you first need to install Node. Once you have node installed, all you need to do is install Grunt via the Node Package Manager (npm) like so:

npm install -g grunt-cli

If you were starting your project from scratch, you’d want to find the plugins you want and follow each one’s instructions to install. Usually the install requires little more than running:

npm install PLUGIN_NAME_HERE --save-dev

So, installing the Grunt JShint plugin would be:

npm install grunt-contrib-jshint --save-dev

For this tutorial, if you’ve cloned the repo for the jquery area plugin, run this instead:

npm install && bower install

This will install all of the dev dependencies for the Grunt tasks as well as installing the jQuery and QUnit files needed for testing.

Let’s back up a second: what is Grunt?

Grunt is a “JavaScript task runner”. The goal of Grunt is to facilitate the automation of developer tasks. I discussed automation in an earlier blog post. Like any other tool, the purpose is to allow us to either do things more efficiently or do things we could never do in the first place:

As tools and technologies continue to evolve, mankind’s goals remain the same: make things easier, faster, better and make the previously impossible become possible.

There are some tasks that developers do over and over during their regular day-to-day work which are made far easier through automation. There are even some automation-related tasks developers do which can be further automated. In this regard, Grunt can be seen as a way to apply DRY even to human effort. I’m a huge fan of that idea.

The specific Grunt plugins we’ll be using are:

  • Load Grunt Tasks (load-grunt-tasks) – this lets us do some lazy loading of all of the Grunt tasks.
  • Time Grunt (time-grunt) – this will show how long each task takes. This can be pretty important when running a lot of tasks or a single task (like a bunch of unit tests) that takes a long time.
  • Clean (grunt-contrib-clean) – we’ll be using this one to simply clean out the ‘dist’ folder prior to adding the final compiled plugin
  • Watch (grunt-contrib-watch) – this is a hugely beneficial task for us, because it will allow us to automatically run specific tasks whenever new changes are saved. For instance, we can set it up so that whenever the plugin file is changed, it runs JSHint and JSCS on it.
  • JSHint (grunt-contrib-jshint) – This task does some syntax checking of JavaScript files to detect potential errors. This kind of task can help you avoid pretty silly bugs based on simple mistakes
  • JSON Lint (grunt-jsonlint) – A bit like JSHint, this does syntax checking on JSON files. For us, this specifically saves us from problems with our configuration files which would in turn cause issues with our tasks running properly.
  • JSCS Checker (grunt-jscs-checker) – JavaScript Code Style Checker, or JSCS, allows us to enforce coding style conventions for the project.
  • QUnit (grunt-contrib-qunit) – QUnit is the JavaScript unit testing framework we’ll be using.
  • Connect (grunt-contrib-connect) – This task sets up a connect server
  • Uglify (grunt-contrib-uglify) – This task will do code minification on our plugin file and place it in the ‘dist’ folder.

Our Workflow: How Grunt and Qunit come into play

In this scenario we’re going to have some ‘watch’ tasks that run while we’re developing, primarily to make sure we don’t make silly coding style mistakes. Along the way, we’ll do test-driven development: defining our acceptance criteria and coding to meet them. Grunt allows us to automate the performance of tasks that we, as developers, do repetitively. As I’ve said in other posts:

In any case where a capability exists which can replace or reduce human effort, it only makes sense to do so. In any case where we can avoid repetitious effort, we should seek to do so.

This is exactly where tools like Grunt and Gulp truly shine. Instead of repetitively saving the files, then running jshint, then jscs, then qunit, then minifying the source, then copying it over to our dist folder, we can avoid that tedium through automation. We can establish a series of tasks, configured to our preferences, to be automatically run while we work, thus increasing our efficiency and quality.

Up next in Part 2: Actual TDD

At 2200+ words already, we’ll have to reserve the discussion of the TDD process for Part 2. We’ll go through defining the tests, creating fixtures, and writing the code. Stay tuned!


What an incredible start to today

This morning I’m sitting here in bed as I do virtually every morning: Working on one of many programming projects. Sometimes they’re “official” projects and sometimes just experiments. The big difference today is that I’m in San Jose, CA for Open Web Camp. About 45 minutes ago I spoke briefly with my wife. As I put my phone down I noticed a missed call and voicemail from a number I didn’t recognize.

Usually when I get calls like that, they’re from people who want a VPAT written urgently. They’ve arrived at this site after Googling for “VPAT”, landed on this post, and then called me. I send them over to Brian Landrigan at The Paciello Group and that’s that. But this call was different:

“I’m not sure if this is the right number to call or not. Based on your voicemail greeting, you sound like you might be younger than the person I’m looking for, but I’m looking for a relative of Fred T. Groves…”

I hung up on the voicemail and called the number right back.

Fred Groves was my uncle. He was a US Marine who died on Iwo Jima in World War II. He was just a boy when he died. My grandfather had signed a special permission form for Fred to join the Marines at 17, and Fred was barely 18 when he died on Iwo Jima. In other words, had he waited until he was 18 to enlist, he wouldn’t even have been at Iwo Jima. It tore my grandfather apart. Naturally I never got to meet Fred. In fact, my own father was only 4 years old when Fred died.

I spent a few moments on the phone with the guy who had called me. He must’ve been around my age and was calling on behalf of his father-in-law. He proceeded to tell me the stories his father-in-law tells about Fred and how he tried to “look after” Fred because he was so young. He was there – literally there – at Fred’s last moments on this planet. To this day, nearly 70 years after Iwo Jima, this man’s father-in-law still talks about Fred.

It makes me both proud and sad to have had a relative that touched others so deeply and yet to have been lost so early.