A team from Carnegie Mellon created a platform to translate memes into sounds rather than text as a way to convey sentiment through music and sound effects for people with visual impairments. Credit: Carnegie Mellon University

People with visual impairments use social media like everyone else, often with the help of screen reader software. But that technology falls short when it encounters memes, which typically lack alternate text, or alt text, describing what's depicted in the image.

To counter this, researchers at Carnegie Mellon University have developed a method to automatically identify memes and apply prewritten templates to add descriptive alt text, making them intelligible via existing assistive technologies.

Memes are images that are copied and then overlaid with slight variations of text. They are often humorous and convey a shared experience, but "if you're blind, you miss that part of the conversation," said Cole Gleason, a Ph.D. student in CMU's Human-Computer Interaction Institute (HCII).

"Memes may not seem like the most important problem, but a vital part of accessibility is not choosing for people what deserves their attention," said Jeff Bigham, associate professor in the HCII. "Many people use memes, and so they should be made accessible."

Memes largely live within platforms that have barriers to adding alt text. Twitter, for example, allows people to add alt text to their images, but that feature isn't always easy to find. Of nine million tweets the CMU researchers examined, one million included images and, of those, just 0.1 percent included alt text.

Gleason said basic computer vision techniques make it possible to describe the images underlying each meme, whether it be a celebrity, a crying baby, a cartoon character or a scene such as a bus upended in a sinkhole. Optical character recognition techniques are used to decipher the overlaid text, which can change with each iteration of the meme. For each meme type, it's only necessary to make one template describing the image, and the overlaid text can be added for each iteration of that meme.
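One common way to recognize a recurring image template is a perceptual hash: reduce the image to a small grayscale thumbnail, hash it, and compare against hashes of known meme templates by Hamming distance. The sketch below illustrates that general idea only; the function names, the 8x8 thumbnail size, and the distance threshold are assumptions for illustration, not the CMU system's actual code.

```python
# Illustrative "average hash" template matcher. The thumbnail is an
# 8x8 grid of grayscale values (0-255); downscaling from a full image
# is assumed to have happened already.

def average_hash(thumb):
    """Return a 64-bit hash: one bit per pixel, set when the pixel
    is at or above the thumbnail's mean brightness."""
    flat = [p for row in thumb for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def match_template(thumb, known_hashes, max_distance=10):
    """Return the id of the closest known meme template, or None if
    nothing is within max_distance bits."""
    h = average_hash(thumb)
    best = min(known_hashes, key=lambda tid: hamming(h, known_hashes[tid]))
    return best if hamming(h, known_hashes[best]) <= max_distance else None
```

Because the hash reflects coarse brightness structure rather than exact pixels, re-encoded or lightly edited copies of the same base image still land near the stored template, while the per-iteration overlaid text is left to OCR.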


But writing out what the meme is intended to convey proved difficult.

"It depended on the meme if the humor translated. Some of the visuals are more nuanced," Gleason said. "And sometimes it's explicit and you can just describe it." For example, the complete alt text for the so-called "success kid" meme states "Toddler clenching fist in front of smug face. Overlaid text on top: Was a bad boy all year. Overlaid text on bottom: Still got awesome presents from Santa."

The team also created a platform to translate memes into sound rather than text. Users search through a sound library and drag and drop elements into a template. This system was made to translate existing memes and convey the sentiment through music and sound effects.

"One of the reasons we tried the audio memes was because we thought alt text would kill the joke, but people still preferred the text because they're so used to it," Gleason said.

Deploying the technology will be a challenge. Even if it were integrated into a meme generator website, that alt text wouldn't be automatically copied when the image was shared on social media.

"We'd have to convince Twitter to add a new feature," Gleason said. It could be something added to a personal smartphone, but he noted that would put the burden on the user. CMU researchers are currently working on related projects, including a browser extension for Twitter that attempts to add alt text for every image and could include a meme system. Another project seeks to integrate alt text into the metadata of images that would stay with the image wherever it was posted.
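Embedding alt text in an image's own metadata, so it travels with the file, is possible in principle because formats like PNG carry ancillary text chunks. The sketch below shows the mechanism with a minimal standard-library round trip; the `alt` keyword and all function names are assumptions for illustration (the article doesn't describe the project's actual format), and real platforms often strip such metadata on upload.

```python
import struct
import zlib

def png_chunk(ctype, data):
    """Build one PNG chunk: 4-byte length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_alt(width, height, alt_text):
    """Create a tiny black grayscale PNG carrying alt text in a
    tEXt chunk (keyword 'alt', chosen here for illustration)."""
    sig = b"\x89PNG\r\n\x1a\n"
    # IHDR: width, height, bit depth 8, grayscale, default methods
    ihdr = png_chunk(b"IHDR",
                     struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0))
    # tEXt payload: Latin-1 keyword, NUL separator, text
    text = png_chunk(b"tEXt", b"alt\x00" + alt_text.encode("latin-1"))
    # Raw pixels: each scanline is prefixed with filter byte 0
    raw = b"".join(b"\x00" + bytes(width) for _ in range(height))
    idat = png_chunk(b"IDAT", zlib.compress(raw))
    return sig + ihdr + text + idat + png_chunk(b"IEND", b"")

def read_alt(png_bytes):
    """Walk the chunk stream and return the 'alt' tEXt payload, if any."""
    pos = 8  # skip the signature
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and data.startswith(b"alt\x00"):
            return data[4:].decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return None
```

A screen reader or browser extension that understood the convention could then recover the description from the file itself, wherever the image was reposted.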

This work was presented earlier this year at the ASSETS conference in Pittsburgh. Other researchers involved in the work include HCII postdoctoral fellow Amy Pavel, CMU undergraduate Xingyu Liu, HCII assistant professor Patrick Carrington and Lydia Chilton of Columbia University.