“I envision a future where everything will be captioned, so the more than 300 million people who are deaf or hard of hearing like me will be able to enjoy videos like everyone else,” said Liat Kaver, a YouTube Product Manager focusing on captions and accessibility. “When I was growing up in Costa Rica, there were no closed captions in my first language, and only English movies had Spanish subtitles. I felt I was missing out because I often had to guess at what was happening on the screen or make up my own version of the story in my head. That was where the dream of a system that could just automatically generate high quality captions for any video was born.”
As YouTube grew, so did the number of videos with captions, which now stands at over 1 billion. Kaver says that more than 15 million videos are watched each day with captions enabled.
Caption Technology
“One of the ways that we were able to scale the availability of captions was by combining Google’s automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions for videos,” says Kaver. “There were limitations with the technology that underscored the need to improve the captions themselves. Results were sometimes less than perfect, prompting some creators to have a little fun at our expense!”
Kaver says that one of her team’s major goals has been to improve automatic caption accuracy via technological improvements in speech recognition, machine learning, and increases in training data. “Altogether, those technological efforts have resulted in a 50 percent leap in accuracy for automatic captions in English, which is getting us closer and closer to human transcription error rates,” she says. “I know from firsthand experience that if you build with accessibility as a guiding force, you make technology work for everyone.”
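For readers curious how accuracy claims like these are measured: the standard metric in speech recognition is word error rate (WER), the word-level edit distance between the automatic transcript and a human reference. Google hasn’t published its methodology here, so this is only a minimal sketch of the metric itself:

```typescript
// Word error rate (WER): the standard metric behind claims like
// "closer to human transcription error rates". It is the word-level
// edit distance (substitutions + insertions + deletions) divided by
// the number of words in the reference transcript.
function wordErrorRate(reference: string, hypothesis: string): number {
  const ref = reference.toLowerCase().split(/\s+/).filter(Boolean);
  const hyp = hypothesis.toLowerCase().split(/\s+/).filter(Boolean);
  // Classic dynamic-programming edit distance over words.
  const d: number[][] = Array.from({ length: ref.length + 1 }, (_, i) =>
    Array.from({ length: hyp.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= ref.length; i++) {
    for (let j = 1; j <= hyp.length; j++) {
      const cost = ref[i - 1] === hyp[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
    }
  }
  return d[ref.length][hyp.length] / ref.length;
}

// Example: one substituted word in a five-word reference gives a WER of 0.2.
console.log(wordErrorRate("the quick brown fox jumps", "the quick brown fox jumped")); // 0.2
```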
Subtitles for the hearing impaired are commonplace in streaming media, including on Netflix. But descriptive narration for the visually impaired is not nearly as ubiquitous.
Netflix is taking the first steps to remedy this, announcing a new audio description feature that will debut on Daredevil, its original series about a blind superhero. This is more than a coincidence, of course.
Netflix’s new narration feature will provide the visually impaired with descriptions of what’s going on on-screen – including things like physical actions, facial expressions, costumes, settings, and scene changes.
To access the feature, users can simply switch it on the same way they choose a soundtrack in a different language.
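Netflix’s player is proprietary, so the company hasn’t said how the feature works under the hood. On the open web, though, the analogous mechanism is an HTML5 text track of kind "descriptions" whose cues are spoken aloud. A hedged sketch, assuming the page ships such a WebVTT track:

```typescript
// A sketch of how a web video player can expose audio description as a
// user-selectable track, assuming the page ships a WebVTT file with
// kind="descriptions". (Netflix's own player is proprietary; this only
// illustrates the open-web analogue.)
function enableAudioDescriptions(video: HTMLVideoElement): void {
  for (const track of Array.from(video.textTracks)) {
    if (track.kind !== "descriptions") continue;
    track.mode = "hidden"; // load cues without rendering them on screen
    track.addEventListener("cuechange", () => {
      for (const cue of Array.from(track.activeCues ?? [])) {
        // Speak each description cue as it becomes active.
        const utterance = new SpeechSynthesisUtterance((cue as VTTCue).text);
        speechSynthesis.speak(utterance);
      }
    });
  }
}

enableAudioDescriptions(document.querySelector("video")!);
```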
“Netflix is actively committed to increasing the number of audio-visual translations for movies and shows in our English-language catalogues. We are also exploring adding audio description into other languages in the future,” says the company.
“Over time, we expect audio description to be available for major Netflix original series, as well as select other shows and movies. We are working with studios and other content owners to increase the amount of audio description across a range of devices including smart TVs, tablets and smartphones.”
Netflix plans to expand the new feature to its other original series, like House of Cards and Orange Is the New Black.
Just a few days ago, with the launch of the new Daredevil series, a petition arose demanding that Netflix “make its new show about a blind superhero accessible to blind viewers.”
“Netflix’s new show Daredevil is about a blind lawyer turned superhero — and it seems unfair that if he were a Netflix subscriber, he wouldn’t even be able to enjoy his own show! That’s because important accessibility features for the blind and visually impaired are missing from Netflix productions. They rely on a ‘visual description’ option that tells them what is happening on the screen, and Netflix doesn’t offer this feature for their original content,” said the petition.
Hopefully, Netflix will work quickly to extend this accessibility feature to other content, not just its original shows and movies.
This week, Facebook announced the launch of the Accessibility Toolkit, which gives a behind-the-scenes look at the company’s efforts in product usability for the blind and vision-impaired, and offers resources for helping other companies approach the subject.
“When Facebook formed its Accessibility Engineering team in 2011, we experienced the same daunting challenge that many companies face: How do you incorporate accessibility within the company’s existing engineering environment? Having spent the past few years working toward this goal, we’ve learned a lot along the way, and continue to learn each day,” writes Facebook’s Jeffrey Wieland on the company’s code blog.
The toolkit includes a guide through the component library, documentation, quality assurance, engineer training, communication/feedback and general culture around accessibility. It covers how Facebook itself monitors feedback and shares its work publicly, and how it integrates accessibility into QA processes and runs testing with people who have disabilities.
It also shares a touching video about how a blind mother uses Facebook to “break down stigma.”
Communication and Feedback
Facebook’s approach to communication and feedback involves its help center, its dedicated Accessibility Facebook page and Twitter account, usability studies, and internal feedback.
The help center dives into accessibility basics, such as the best ways to access Facebook while using assistive technology, using screen readers, navigating with keyboard shortcuts, and how adding a mobile number to an account can make it more accessible. It also gets into accessibility for privacy and account settings, profile and timeline, News Feed, iOS apps, video, photos, and more.
“At the beginning of 2014 we started running usability studies with people who use assistive technology,” Facebook says. “This is a fantastic way to get in-depth feedback and understand the entire experience of using Facebook. While many challenges for accessibility can be resolved at a detail level, we have to understand how people interact with our product from beginning to end to build a complete experience. Usability studies are also a great way to make accessibility less abstract and more tangible for engineers learning about it for the first time.”
Culture
Within the company, Facebook has a group specifically for people who are interested in the company’s accessibility efforts, and it’s used to gather feedback from staff members who use Facebook all day long. This is really just an extension of the “accessible culture” the company says it’s building.
“Building an accessible culture can only happen when accessibility is integrated into the initiatives and teams within your company,” it says. “This isn’t something that can be broken down into a series of steps. Instead, it’s a gradual shift in the way people think about accessibility internally. By constantly reinforcing the importance of accessibility and having spokespeople within the company delivering the same message, people will start to think about accessibility as a way of doing things in the company rather than as an afterthought.”
Facebook also shares stories from staff about its accessibility culture in the toolkit.
Quality Assurance
The Quality Assurance portion of the toolkit explains how accessibility is part of the central QA process at Facebook, how accessibility improvements move from QA to Product Operations to Engineering, and how ownership is distributed across product and platform teams.
Engineer Training
“Our main goal is to help Facebook’s product teams working across platforms to build the most accessible experiences possible,” the company says in the training portion. “For them to do this, they need to understand how accessibility works on their platforms. Most engineers coming from both industry and academia have had little exposure to the field of accessibility (there are exceptions, of course, but they are few). We can’t expect people who have never heard of accessibility to be equipped to build accessible software applications. So we train them.”
It runs down the guiding principles behind its approach to education, as well as its three formal training programs within engineering. All new engineers, for example, must complete a general six-week bootcamp; those who work on the front-end stack are also required to take an in-depth introduction to Facebook’s web infrastructure, which includes accessibility best practices and additional related resources.
Component Library
The component library section of the toolkit illustrates how Facebook makes things like dialogs, typeaheads, and menus more accessible.
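The toolkit describes these components rather than publishing their source, but the standard ARIA dialog pattern it alludes to, announce the dialog’s role, move focus in, and restore focus on close, can be sketched roughly like this (the "dialog-title" id is an assumption for illustration):

```typescript
// A minimal sketch of the standard ARIA dialog pattern (Facebook's
// actual component code isn't published in the toolkit): announce the
// dialog to screen readers, move focus into it, and restore focus to
// the triggering control when it closes.
function openDialog(dialog: HTMLElement, trigger: HTMLElement): () => void {
  dialog.setAttribute("role", "dialog");
  dialog.setAttribute("aria-modal", "true");
  dialog.setAttribute("aria-labelledby", "dialog-title"); // assumes a heading with this id
  dialog.hidden = false;

  // Move focus to the first focusable element inside the dialog.
  const focusable = dialog.querySelector<HTMLElement>(
    "button, [href], input, select, textarea, [tabindex]:not([tabindex='-1'])"
  );
  focusable?.focus();

  // Return a close function that restores focus to the trigger, so
  // keyboard and screen reader users don't lose their place.
  return () => {
    dialog.hidden = true;
    trigger.focus();
  };
}
```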
Documentation
Finally, the Documentation portion of the toolkit explains how the company incorporates accessibility into its vast amount of documentation. This includes using an FAQ format.
“We don’t cover all accessibility topics, just those we expect a generalist engineer to own. We don’t expect a generalist engineer to become a student of ARIA’s history, so we omit it,” it says.
Examples of things it expects a generalist engineer to own include adding labels to buttons, adding alt text for static images, making sure UI elements get focus, and using modern components from the Component Library.
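None of Facebook’s internal code appears in the toolkit, but each of those fixes is small in practice. A hedged sketch of what they might look like in plain DOM code (the element ids are hypothetical):

```typescript
// Sketches of the fixes a generalist engineer is expected to own
// (element ids here are hypothetical, for illustration only):

// 1. An icon-only button needs a label a screen reader can announce.
const shareButton = document.querySelector<HTMLButtonElement>("#share")!;
shareButton.setAttribute("aria-label", "Share this post");

// 2. A static image needs alt text describing its content.
const photo = document.querySelector<HTMLImageElement>("#profile-photo")!;
photo.alt = "Profile photo of the post's author";

// 3. A custom interactive element must be reachable and focusable
//    from the keyboard, not just clickable with a mouse.
const customControl = document.querySelector<HTMLElement>("#custom-menu")!;
customControl.tabIndex = 0; // include it in the tab order
customControl.focus();      // move focus when the UI requires it
```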
Facebook is calling on other companies to share their approach to accessibility on the Facebook Accessibility page, and is hiring engineers to work in this area, including for WhatsApp.
Twitter announced that it has reached an agreement to acquire Bangalore-based ZipDial in an effort to make Twitter more accessible around the world. The company says the deal “significantly” increases its investment in India where Twitter is already seeing great growth, and also gives it a new engineering office in the country.
According to Twitter, the acquisition will help it combat high data costs for those getting online for the first time in countries like India, Indonesia, and Brazil, where many are getting Internet access on mobile devices.
“ZipDial has built a mobile platform that lets people follow and engage with content across all interfaces,” explains Twitter VP of Product, Christian Oestlien. “The user experience combines SMS, voice, mobile web, and access to mobile apps to bridge users from offline to online. For example, through ZipDial, it’s easy to engage with a publisher or brand by making a toll-free ‘missed call’ to a designated phone number. The caller will then begin receiving inbound content and further engagement on their phone in real time through voice, SMS or an app notification. These interactions are especially appealing in areas where people aren’t always connected to data or only access data through intermittent wifi networks.”
Twitter has already worked with ZipDial on the Indian elections, Bollywood film promotions and @MTVIndia’s #RockTheVote “Dial the Hashtag” campaign.
“Today, people across India use ZipDial’s platform to access great content, including cricket scores, audio programming, Tweets from their favorite Bollywood stars – and much more – on their mobile phones,” Oestlien adds. “Leading figures, including actors, politicians and athletes, also use the platform to instantly reach millions of citizens on Twitter through text and voice messages. By coming together with ZipDial, we’ll help more people around the world enjoy great and relevant Twitter experiences on their mobile phones.”
While ZipDial should help make Twitter more accessible, it should also have a significant impact on the company’s advertising efforts. ZipDial says it has achieved over a billion connections with brands across 60 million users. The company ranked No. 8 on Fast Company’s 2014 list of the world’s 50 most innovative companies.
In India, friends intentionally call each other, let it ring once or twice, and hang up. That’s their way of sending a signal, like “I’m home safe,” without being charged for a call in a country with pricey telecommunications and limited Internet accessibility. California native Valerie Wagoner moved to Bangalore, noticed the missed calls, and is now responsible for 416 million of them: That’s how many times people have used her company, ZipDial, to connect with brands including Gillette, Disney, Procter & Gamble, and IndiaInfoLine.
It works like this: She issues the brand a number, which it prints on its ads. Consumers call, hang up, and get a text or call in return—and thus are entered in contests, receive coupons, or place an order. In 2013, she expanded to Sri Lanka and Bangladesh, and is now setting up in Indonesia, Singapore, and the Philippines.
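ZipDial hasn’t published its internals, but the flow described above has a recognizable shape: a telephony provider reports an inbound call, the service rejects it unanswered so the caller pays nothing, then it looks up the campaign tied to the dialed number and replies by SMS. A rough sketch under those assumptions, with every name hypothetical:

```typescript
// A hedged sketch of a missed-call campaign flow in the general shape
// ZipDial describes. Everything here (the provider interface, function
// names, campaign lookup) is hypothetical, not ZipDial's actual API.

interface Campaign {
  brand: string;
  reply: string; // the SMS content sent back to the caller
}

// Campaigns keyed by the toll-free number printed on the brand's ads.
const campaigns = new Map<string, Campaign>([
  ["+911800000001", { brand: "ExampleCola", reply: "Thanks! Your coupon code is ABC123." }],
]);

// Hypothetical webhook invoked by a telephony provider when a call
// rings a campaign number. The call is rejected unanswered, so the
// caller is never charged, then content is delivered over SMS.
async function onInboundCall(
  callerNumber: string,
  dialedNumber: string,
  provider: {
    reject(caller: string): Promise<void>;
    sendSms(to: string, body: string): Promise<void>;
  }
): Promise<void> {
  await provider.reject(callerNumber); // hang up: the "missed call" costs the caller nothing
  const campaign = campaigns.get(dialedNumber);
  if (campaign) {
    await provider.sendSms(callerNumber, campaign.reply); // the opted-in response
  }
}
```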
Wagoner said at the time, “Our thesis has engagement at its core and is truly designed for emerging markets consumers. It amplifies consumer-to-brand engagement and therefore data at a personalized level. This then enables targeted marketing of the right message to the right user at the right time, thus maximizing ROI and impact for brands.”
ZipDial will continue building upon its existing platform, and as a part of Twitter, it will be able to expand on a global scale.
“Our ambitious goal is to make Twitter’s unique, great content accessible to 100% of the world’s mobile users, including those in emerging markets who will be experiencing the mobile Internet for the first time,” the company says. “We could not be prouder to join the flock.”
Gary Bourgeault, an analyst at Seeking Alpha, says ZipDial could help Twitter add value to non-logged-in users, and notes that the “missed call” advertising offering is a form of permission marketing.
“When consumers opt-in to a dropped call service, they are giving marketers permission to contact them. That signals a level of interest in future marketing communications — a vital datapoint for marketers,” adds Lara O’Reilly at Business Insider. “The missed call format could also extend to sponsored units — Twitter’s bread and butter when it comes to ad formats. Twitter could give brands the option to sponsor some of the news or information services signing up to use the format. Twitter already has its Amplify program, in which it partners with media and sports brands like the NBA and the NFL to showcase video and images from live events, complete with sponsorships from brands like American Express and McDonald’s. ZipDial gives Twitter the chance to extend this initiative beyond the handful of US partners that have joined it to date.”
The companies aren’t disclosing terms of the acquisition, but analysts are estimating it to be between $30 million and $40 million.
In honor of Global Accessibility Awareness Day, LinkedIn announced a few things it’s been doing to make the social network more accessible. These include improved site navigation, improvements to interaction with the service, and the addition of image descriptions.
“What started out as a few passion projects by members of LinkedIn’s web development team has now become the formation of our Accessibility Web Developer Task Force, dedicated to making LinkedIn user experiences inclusive and accessible,” says LinkedIn’s Sarah Clatterbuck.
As far as the navigation goes, she says, “Members who navigate with a keyboard can now better perceive where they are on a LinkedIn page and save time in moving between professional content and features.”
Additionally, real-time notifications are available to those navigating by keyboard, and actions like sending messages and interacting with dialog boxes can be performed quickly and easily with the keyboard or a screen reader.
Image alt text is now being employed in all major areas of the site.
The company says it is currently working on an in-page navigation tool to help keyboard and screen reader users better navigate long pages.
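LinkedIn hasn’t detailed how that tool will work, but the most common technique for in-page navigation of long pages is the “skip link”: a link at the top of the tab order that jumps keyboard and screen reader users straight to the main content. A minimal sketch (the "main-content" id is an assumption):

```typescript
// A minimal "skip link" sketch: the usual technique for letting
// keyboard and screen reader users jump past repeated page chrome.
// (LinkedIn's actual implementation isn't public; the id is assumed.)
function addSkipLink(targetId: string): void {
  const target = document.getElementById(targetId);
  if (!target) return;

  // The target must be focusable for the jump to move keyboard focus.
  target.tabIndex = -1;

  const link = document.createElement("a");
  link.href = `#${targetId}`;
  link.textContent = "Skip to main content";
  link.addEventListener("click", () => target.focus());

  // Insert as the first element so it is the first tab stop on the page.
  document.body.prepend(link);
}

addSkipLink("main-content"); // assumes <main id="main-content"> exists
```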
Flipboard has launched an update to its Android app, enabling users to take advantage of the service’s audio category.
“This means that you can now enjoy everything featured in our Audio category in the Content Guide—including segments from NPR’s Fresh Air and PRI’s The World—and more: you can connect your Flipboard to your SoundCloud account to flip through your Stream, Likes, Sounds, Sets and Groups. Find something you like, and press play to listen via Flipboard,” the Flipboard team explains in a blog post. “The music note in the top bar of the section controls your audio experience.”
The company launched the audio feature back in May for iOS devices. It’s coming to Nook and Kindle devices in the next few days.
Android users can get the updated app from the Google Play store. The update also comes with some bug fixes and performance improvements.
Google has posted a video about ChromeVox, the company’s screen reader Chrome extension.
“Unlike most accessibility software, it is built using only web technologies like HTML5, CSS and JavaScript. ChromeVox was designed from the start to enable unprecedented access to modern web apps, including those that utilize W3C ARIA (Accessible Rich Internet Applications) to provide a rich, desktop-like experience,” Google says of ChromeVox. “This enables visually impaired users to experience the power of web applications while also giving developers a way to verify the accessibility of their web applications.”
The video covers set-up, using ChromeVox to listen to your web app, common pitfalls for web accessibility (and how to fix them), and how Google tests ChromeVox code.
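The post doesn’t enumerate the video’s pitfalls, but one of the most common on the web, and one a screen reader like ChromeVox immediately exposes, is the clickable <div> that can’t be announced or reached from the keyboard. A sketch of the usual retrofit (using a real <button> from the start is better still):

```typescript
// The classic web accessibility pitfall: a clickable <div> is invisible
// to screen readers and unreachable by keyboard. Retrofitting it looks
// like this; using a real <button> element from the start is better.
function makeDivActLikeButton(div: HTMLElement, onActivate: () => void): void {
  div.setAttribute("role", "button"); // announce it as a button
  div.tabIndex = 0;                   // put it in the tab order
  div.addEventListener("click", onActivate);
  div.addEventListener("keydown", (e: KeyboardEvent) => {
    // Real buttons activate on Enter and Space; replicate that.
    if (e.key === "Enter" || e.key === " ") {
      e.preventDefault();
      onActivate();
    }
  });
}
```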
Google is working on making Google+ more accessible. The company announced that it is launching a new app called Hangout Captions to help integrate live transcription into Google+ Hangouts.
The following update from Google just hit Google+:
Live transcription integration for your Hangouts

This is +Naomi Black from the Google Accessibility team. I've been working with ace Hangout developers +Robert Pitt, +Mohammad Eshbeata, and +Brian Aldridge on a new app to make communication between deaf and hearing participants easier in your Google+ Hangouts.

By adding the +Hangout Captions app, you can either connect live text from a professional transcriptionist to your Hangout, or type right into a text box yourself to transcribe a Hangout for your friends. Right now, we only support professional transcription through StreamText and our "do it yourself" Basic Transcription. This is an early look at the app so you can tell us what you think.

To find out more about the new +Hangout Captions app (and more importantly, try it out) check out the website: https://hangout-captions.appspot.com/

Or, try it right away by starting a Hangout with the app running by clicking this link: https://plus.google.com/hangouts/_/?gid=8064685913
Google is also making its Android operating system more accessible. The OS, unveiled at Google I/O last week, comes with new APIs for accessibility services to let developers handle gestures and manage accessibility focus as users move through on-screen elements and navigation buttons, using accessibility gestures, accessories, and other input.
“The TalkBack system and explore-by-touch are redesigned to use accessibility focus for easier use and offer a complete set of APIs for developers,” Google adds on its Jelly Bean page. “Accessibility services can link their own tutorials into the Accessibility settings, to help users configure and use their services.”
Google also held an Accessibility for Android session at Google I/O, encouraging developers to consider accessibility more in their own apps, which should make for an overall more accessible Android experience.
With Jelly Bean, apps that use standard View components inherit support for the new accessibility features automatically, so developers don’t have to change their code.
Google offers a lot more information about the accessibility features of its various products on its accessibility site.
Google has posted numerous sessions from Google I/O on YouTube, including one that looks specifically at making your web apps more accessible.
The session starts with testing for accessibility, followed by advanced screen reader accessibility, then ChromeVox extensions and APIs for low vision.
The description explains, “This session will help you learn through code samples and real world examples how to design and test your web apps for complete accessibility coverage. We will review APIs such as the Text-to-speech (TTS) API, tools like ChromeVox and ChromeShades and how Google products implement solutions today for users with disabilities.”
As long as we’re on the topic of Google and accessibility, the company has also added some more accessibility features of its own to the latest version of Android (Jelly Bean), which was unveiled at Google I/O.
New APIs for accessibility services let you handle gestures and manage accessibility focus as the user moves through the on-screen elements and navigation buttons using accessibility gestures, accessories, and other input. The TalkBack system and explore-by-touch are redesigned to use accessibility focus for easier use and offer a complete set of APIs for developers.
Accessibility services can link their own tutorials into the Accessibility settings, to help users configure and use their services.
Apps that use standard View components inherit support for the new accessibility features automatically, without any changes in their code. Apps that use custom Views can use new accessibility node APIs to indicate the parts of the View that are of interest to accessibility services.
The new iOS 6 was announced this afternoon at Apple’s Worldwide Developers Conference, and the updated operating system appears to have gained tons of new integrated features. One new feature that might be overlooked, but certainly deserves some attention, is the new Guided Access mode for iOS devices.
Guided Access is a form of accessibility software for iOS. Apple has long been at the forefront of technologies dedicated to helping people with disabilities interact with its products. Guided Access gives a parent or teacher full control over how an iOS device can be used. For example, the home button and all other hardware buttons can be locked, motion sensitivity can be disabled, or a certain portion of the screen can be made unresponsive to touch. In addition, the device can be locked into a single app. This means that an iOS device can now reliably be used to test students or give reading assignments, without fear that they will lose focus and end up playing Angry Birds. Apple also mentioned that devices could be locked down in this manner and used for museum information apps.
It’s just like Apple to provide a simple, elegant solution to a tricky problem. It will be interesting to see what parents and teachers use Guided Access for, since the best uses for such technology are often found by the users rather than the designers.
Google uploaded a video of a self-driving car test showing Steve Mahan, a legally blind man, behind the wheel as the car stops at Taco Bell for a burrito.
Google says in the video description, “We announced our self-driving car project in 2010 to make driving safer, more enjoyable, and more efficient. Having safely completed over 200,000 miles of computer-led driving, we wanted to share one of our favorite moments. Here’s Steve, who joined us for a special drive on a carefully programmed route to experience being behind the wheel in a whole new way. We organized this test as a technical experiment, but we think it’s also a promising look at what autonomous technology may one day deliver if rigorous technology and safety standards can be met.”
“95% of my vision is gone. I’m well past legally blind,” says Mahan in the video. “You lose your timing in life. Everything takes you much longer. There are some places that you cannot go. There are some things that you cannot do. Where this would change my life is to give me the independence and the flexibility to go to the places I both want to go and need to go when I need to do those things.”
A note at the end of the video says it was produced in partnership with the Morgan Hill Police Department and the Santa Clara Valley Blind Center in San Jose, CA.
As reported earlier this month, Google’s driverless cars have inspired new legislation in California, similar to legislation introduced last year in Nevada, which legalized autonomous cars and cleared the way for Google’s program. It’s hard to believe the program has already come so far, and there’s no telling how much further it will go if enough legislation is passed in other states, and eventually other countries.
I have a feeling we’ll be seeing Google put out a lot more videos similar in nature to the one above, to promote the company’s efforts.
Mohamed Mansour, who’s actually a software engineer for RIM, has created a Chrome extension to make Google+ hangouts more accessible to blind people.
It uses text-to-speech technology, reading content from the chat box aloud to the user.
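Mansour’s extension source isn’t reproduced in the post, but the core idea, watching the chat pane and speaking each new message aloud, can be sketched with a MutationObserver and the browser’s built-in speech synthesis (the ".chat-message" selector is hypothetical):

```typescript
// A sketch of the core idea behind the extension: watch the Hangout
// chat pane and speak each new message aloud. The ".chat-message"
// selector is hypothetical; the real markup isn't shown in the post.
function speakNewChatMessages(chatPane: HTMLElement): void {
  const observer = new MutationObserver((mutations) => {
    for (const mutation of mutations) {
      for (const node of Array.from(mutation.addedNodes)) {
        if (node instanceof HTMLElement && node.matches(".chat-message")) {
          // Hand each new message to the browser's speech synthesizer.
          speechSynthesis.speak(new SpeechSynthesisUtterance(node.textContent ?? ""));
        }
      }
    }
  });
  observer.observe(chatPane, { childList: true });
}
```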
“Today I was in a hangout, where I met a blind war veteran,” says Mansour on Google+. “It was really inspiring to see him use such technology. But it was really hard for him to use it since it wasn’t accessible friendly. I was writing on the chat on the left, and it was really difficult for him to read the text. I am really into Accessibility, and I worked on it a lot in Chromium in the past, and I really wanted to help him.”
“So I told Tim, ‘what if the chat spoke to you?’ Tim said, ‘that would be awesome’. When I saw the big smile on his face, I had to do it! So I created this extension for Chrome!” he adds.
The description in the Chrome Extension gallery says the first version focuses on the blind, but Mansour hopes to add more features to make it even better and more accessible, and he is looking for feedback to that effect.
Mansour notes in his Google+ post that he hopes Google will adopt this for Hangouts itself.
Earlier this week, we looked at some improvements Google made to its Hangouts feature in Google+, specifically in terms of accessibility and sign language. Google improved video quality and made it easier to see signing.
Google has also been working on some other accessibility-related features for Google Docs and Google Calendar.
“This fall, as classrooms fill with the hustle and bustle of a new semester, more students than ever will use Google Apps to take quizzes, write essays and talk to classmates,” says Google Accessibility technical lead T.V. Raman. “Yet blind students (like blind people of all ages) face a unique set of challenges on the web. Members of the blind community rely on screen readers to tell them verbally what appears on the screen. They also use keyboard shortcuts to do things that would otherwise be accomplished with a mouse, such as opening a file or highlighting text.”
The company has been working with advocacy organizations for the blind to make its products more accessible, and has already improved keyboard shortcuts and support for screen readers in Google Docs, Google Sites and Google Calendar. On the Gmail Blog, the company provides some examples of how screen readers and keyboard shortcuts have been improved specifically in Google Calendar:
In your calendar lists, you can use the up and down arrow keys to navigate between your calendars. For each calendar in the list, you’ll hear its name and can use the spacebar to turn the calendar on or off. To remove a calendar from the list, use the delete key.
In the agenda view, you can use the up and down arrow keys to move between events and use the left and right arrow keys to move between dates. To expand an event and expose the event details, press enter. To go to the event details page, type ‘e’. To remove an event, press delete. Although agenda view provides the best screen reader experience today, we are also working on improved accessibility for other views.
In the guest list on the create/edit event page, you can navigate around using the up and down arrow keys. Use the spacebar to switch a guest’s status between optional and required. To remove a guest from the list, use the delete key.
Additional keyboard shortcuts make it easier to use Google Calendar no matter which view or screen you’re on. Type ‘c’ to create an event, ‘/’ to start a search, and ‘+’ to add a calendar.
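Those shortcuts are user-facing features of Calendar itself, but for developers wondering how single-key shortcuts like these are typically wired into a web app without hijacking text entry, here is a generic sketch (the handlers are placeholders, not Google’s actual code):

```typescript
// A generic sketch of single-key shortcuts like 'c', '/', and '+'.
// The handlers are placeholders; this is not Google Calendar's code.
const shortcuts: Record<string, () => void> = {
  c: () => console.log("create event"),
  "/": () => console.log("focus the search box"),
  "+": () => console.log("add a calendar"),
};

document.addEventListener("keydown", (e: KeyboardEvent) => {
  // Never hijack keys while the user is typing into a form field.
  const target = e.target as HTMLElement;
  if (target.matches("input, textarea, [contenteditable]")) return;
  const action = shortcuts[e.key];
  if (action) {
    e.preventDefault();
    action();
  }
});
```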
For a complete list of shortcuts and more information about screen reader functionality, see Google’s help center.
JAWS, VoiceOver, and ChromeVox are all supported by Google Calendar.
Google says it will continue to improve products for blind users in the weeks and months ahead.
Google has made some improvements to Hangouts, specifically around sign language.
Google engineering director Chee Chew said in a Google+ post that Google has been “aggressively improving video quality and stability.”
“It’s still a huge challenge to transmit 10 video feeds to 10 end points, potentially all around the world,” says Chew. “We still have lots of improvements we want to make, but I hope you [have] see[n] a substantial improvement in video stability in the past several weeks. This will be a never-ending effort.”
“Second, as I hungout in signing hangouts, I also noticed that most people were trying to watch others sign from the thumbnail video,” says Chew. “Our voice activated video switching for the main video usually just stayed on whomever had the most background noise.”
Google added a “Take the Floor” feature: everyone mutes their audio, you hit “shift+s” when you want to sign something, and you begin signing once you see yourself in the main video. It only works while you’re muted, though.
Last week, Google told some of its “trusted developers” that it would make the Google+ APIs available. While it may be a while until the APIs are widely available, it will be interesting to see what other developers can do with Google+, and Hangouts in particular, in terms of accessibility.
IBM has partnered with the Industrial Design Centre at the Indian Institute of Technology, Bombay on mobile web research. The initiative will focus on development of new designs of mobile device interfaces that can be used by people who are semiliterate or illiterate, as well as individuals who have limited or no access to information technology.
We were equally jazzed about Google Wave internally, even though we weren’t quite sure how users would respond to this radically different kind of communication. The use cases we’ve seen show the power of this technology: sharing images and other media in real time; improving spell-checking by understanding not just an individual word, but also the context of each word; and enabling third-party developers to build new tools like consumer gadgets for travel, or robots to check code.
But despite these wins, and numerous loyal fans, Wave has not seen the user adoption we would have liked. We don’t plan to continue developing Wave as a standalone product, but we will maintain the site at least through the end of the year and extend the technology for use in other Google projects. The central parts of the code, as well as the protocols that have driven many of Wave’s innovations, like drag-and-drop and character-by-character live typing, are already available as open source, so customers and partners can continue the innovation we began. In addition, we will work on tools so that users can easily “liberate” their content from Wave.
Nielsen reports that the mobile Internet is more popular in China than it is in the U.S. "Widespread ownership of mobiles is only a fairly recent development in China, but consumers there have fully embraced the technology and in some ways are using it more robustly than their American and European counterparts," says Shan Phillips, Vice President, Greater China, Telecom Practice, The Nielsen Company.
Nielsen also has another interesting report looking at who is buying the iPad, and asking if they will also buy an iPhone.
WordPress has introduced its own "like" buttons. Now readers can "like" posts, although I’d say for publishers, the Facebook "like" buttons will be a lot more effective for driving traffic. Still, it’s nice to provide as many gateways for engagement as possible (without getting too cluttered, anyway).
According to the Financial Times, Motorola and Verizon have teamed up on a "TV Tablet." This is a device with a 10-inch screen that users will be able to watch television on.
Reuters reports that Sharp intends to launch a 3D smartphone this year. This would feature a 3D panel that can be viewed without special glasses and would have a 3D capable camera.
According to Unwired Review, Samsung is considering putting touchscreen functionality on the back of a tablet. This is based on a patent application for a "mobile terminal having dual touch screen and method of controlling content therein".
Meanwhile, as Engadget writes, Microsoft has been teasing an as-yet-unannounced product via Twitter, saying, "Don’t be so touchy…flat is where it’s at," and offering a small partial image of some object. This may or may not be a trackpad.
Unless you count the time Ford arranged for a blind man to drive a Mustang on the company’s proving grounds, speed and special nods to accessibility don’t often go together. Today, however, Google made its speedy browser, Chrome, more accessible by introducing a new category of featured extensions.
ChromeVis, which was designed to improve the visibility of text, is perhaps the main attraction in the new "Accessibility" group. In an accompanying video, engineer Rachel Shearer explains how it offers all sorts of thoughtful touches that go beyond basic magnification.
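ChromeVis’s code was open sourced (see below), though it isn’t reproduced here. The core idea, displaying the current text selection in a large, high-contrast overlay, can be sketched roughly like this:

```typescript
// A rough sketch of the ChromeVis idea: show the current text
// selection in a large, high-contrast overlay. This is a conceptual
// illustration, not the extension's actual open-sourced code.
function showSelectionMagnified(): void {
  const text = window.getSelection()?.toString();
  if (!text) return;

  const lens = document.createElement("div");
  lens.textContent = text;
  // Large, high-contrast, fixed-position "lens" along the bottom edge.
  Object.assign(lens.style, {
    position: "fixed",
    bottom: "0",
    left: "0",
    right: "0",
    padding: "0.5em",
    fontSize: "2.5em",
    background: "black",
    color: "yellow",
    zIndex: "99999",
  });
  document.body.append(lens);

  // Dismiss the lens on the next click anywhere.
  document.addEventListener("click", () => lens.remove(), { once: true });
}

document.addEventListener("mouseup", showSelectionMagnified);
```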
Then, as Jonas Klink, a product manager, noted on the Google Chrome Blog, "You will also find extensions like Chrome Daltonize that can help color-blind users to see more details in web pages or gleeBox that provides alternatives to actions traditionally performed via the mouse such as clicking, scrolling and selecting text fields."
Klink also wrote, "To encourage more developers to incorporate best practices in accessibility when designing extensions, we’ve open sourced the code behind ChromeVis and created relevant documentation."
So Google’s effort is likely the beginning of something big rather than a one-time acknowledgment. Or even a two-time acknowledgment, considering that a few more extensions are supposed to be on the way. And all sorts of people, including those who are simply tired of black-on-gray text or don’t like reaching for the mouse, should benefit.
We’ll try to provide an update when Google makes another move in this field.