View Google's blog to see the announcement video.
Ken Harrenstien, a software engineer at Google, announced that the company's accessibility team has developed a system that uses automatic speech recognition (ASR) technology to create machine-generated automatic captions ("auto-caps") for certain videos on YouTube. Every minute of every day, more than 20 hours of video are uploaded to the Internet, with the vast majority of the audio content inaccessible to deaf or hearing-impaired viewers. The new technology automatically generates captions for user-generated video. Mr. Harrenstien, who is himself deaf, said that while the captioning is not yet perfect, "compared to nothing this is wonderful." Viewers can select the closed-captioning icon to display the captions, make the text larger or smaller, and change the background color to maximize readability. The system is currently being tested by the 13 partners involved in the initial launch, including UC Berkeley, Stanford, and UCLA, before being released to the public.
In addition to automatic captions, Google also announced the launch of "auto-timing," which makes it significantly easier for YouTube video makers to create captions manually without any special expertise. Google's ASR takes a simple text file containing all the words spoken in the video, figures out when each word is spoken, and inserts the captions at the appropriate points. The system is already in place, and Google has asked users to provide feedback so the company can improve and develop it as needed.
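The auto-timing idea can be sketched in a few lines of code. The snippet below is an illustrative mock-up, not Google's actual system: it assumes an ASR engine has already produced per-word (start, end) timestamps, pairs them with the creator's plain-text transcript, and groups the words into timed caption cues in the common SubRip (SRT) format. The `timings` data and the function names are hypothetical stand-ins.

```python
def to_srt_time(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def auto_time(transcript, word_timings, words_per_caption=4):
    """Pair transcript words with ASR word timings and emit SRT-style cues.

    transcript    -- plain text of everything spoken in the video
    word_timings  -- list of (start, end) seconds for each spoken word,
                     as a hypothetical ASR engine might report them
    """
    words = transcript.split()
    assert len(words) == len(word_timings), "one timing per transcript word"
    cues = []
    for i in range(0, len(words), words_per_caption):
        chunk = words[i:i + words_per_caption]
        times = word_timings[i:i + words_per_caption]
        start, end = times[0][0], times[-1][1]  # cue spans its words
        cues.append(f"{len(cues) + 1}\n"
                    f"{to_srt_time(start)} --> {to_srt_time(end)}\n"
                    f"{' '.join(chunk)}\n")
    return "\n".join(cues)

# Hypothetical ASR output: (start, end) in seconds for each word.
timings = [(0.0, 0.4), (0.4, 0.7), (0.7, 1.1), (1.1, 1.6),
           (2.0, 2.3), (2.3, 2.9), (2.9, 3.5)]
print(auto_time("welcome to our channel please subscribe today", timings))
```

The creator only supplies the transcript; all timing comes from the recognizer, which is what removes the need for any captioning expertise.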
These two systems are seen as a major step toward Google's stated mission of making the Internet available and accessible to all.