Peter Suciu for redOrbit.com – Your Universe Online
A planetarium visit takes more than sight to be fully appreciated.
When visiting a planetarium, deaf students are left in the dark when it comes to a tour of outer space. With the lights off, they can’t see an American Sign Language (ASL) interpreter’s narration, but with the lights on, the students can’t see the constellations of stars projected overhead.
Now a group of researchers at Brigham Young University has come up with a solution via the “Signglasses” Project. Professor Mike Jones and his students developed a system that can project sign language narration onto several types of glasses including Google Glass.
While researchers usually have to recruit test subjects, in this particular case some of the researchers were happy to volunteer themselves. Tyler Fougler and a few of the other student researchers were born deaf, and the project gave them a chance to experience what they had been missing.
By sheer coincidence, the only two deaf students who had taken Jones’ computer science class also signed up for the National Science Foundation-funded research project. Jones is director of the Computer Generated Natural Phenomena Lab in the BYU CS Department.
“My favorite part of the project is conducting experiments with deaf children in the planetarium,” Tyler wrote in a statement. “They get to try on the glasses and watch a movie with an interpreter on the screen of the glasses. They’re always thrilled and intrigued with what they’ve experienced. It makes me feel like what we are doing is worthwhile.”
For this study, the BYU team tested the system during a field trip for high school students from the Jean Massieu School for the Deaf. During the study, researchers found that the way deaf users expect to use these visual aids can differ from the way people already use Google Glass.
Test subjects said that the signer should be displayed in the center of a lens, instead of at the top, which is how Google Glass displays video. The researchers found deaf participants preferred to look straight through the signer when they returned focus to the show.
Jones said that the potential for this technology also goes beyond planetarium shows, and currently the team is working with researchers at Georgia Tech to explore Signglasses as a literacy tool.
“One idea is when you’re reading a book and come across a word that you don’t understand, you point at it, push a button to take a picture, some software figures out what word you’re pointing at and then sends the word to a dictionary and the dictionary sends a video definition back,” Jones added.
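As a rough illustration of the pipeline Jones describes, the Python sketch below strings together the capture, word-recognition, and dictionary-lookup steps. All of the function names and dictionary entries are hypothetical placeholders, not part of any actual Signglasses software.

```python
"""Illustrative sketch of the point-and-look-up idea Jones describes.
The function names and dictionary entries are made up for this example;
they are not part of any real Signglasses or Google Glass API."""

# Hypothetical mapping from printed words to ASL video definitions.
ASL_VIDEO_DICTIONARY = {
    "nebula": "https://example.com/asl/nebula.mp4",
    "constellation": "https://example.com/asl/constellation.mp4",
}


def recognize_pointed_word(photo: bytes) -> str:
    """Stand-in for the step that figures out which word the reader is
    pointing at (a real system would combine OCR with finger tracking)."""
    return "nebula"  # hard-coded so the example runs without a camera


def fetch_video_definition(word: str) -> str | None:
    """Send the word to the dictionary and get a signed video back."""
    return ASL_VIDEO_DICTIONARY.get(word.lower())


def on_button_press(photo: bytes) -> None:
    """The reader points at a word and pushes a button to take a picture."""
    word = recognize_pointed_word(photo)
    video_url = fetch_video_definition(word)
    if video_url:
        print(f"Showing ASL definition of '{word}': {video_url}")
    else:
        print(f"No signed definition available for '{word}'")


if __name__ == "__main__":
    on_button_press(b"")  # placeholder image data
```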
Jones will present the full results of the research next month at the Interaction Design and Children conference.
This type of display may not be limited to ASL in the future, and as Jones noted it could have uses outside the planetarium.
“Speech to text, which is getting very good, should be able to display the words someone is saying real time and significantly assist lip reading,” said Rob Enderle, principal analyst at the Enderle Group. “You could also feed a transcription into the headset much like translators provide translations to earphones for those listening to a talk in a foreign language and the visual image could either be a translated transcript or be a real time speech-to-text feed from the translator.
“You might want to make some changes to the display so it could show more text but this alone could prove invaluable,” Enderle told redOrbit. “Those that have to sign to speak could find that people with Google Glass but untrained in sign language might be able to load an application that used the camera to translate the hand signals so this could actually work both ways.”
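For a sense of how the speech-to-text side of Enderle’s suggestion might work, the sketch below uses the open-source SpeechRecognition package for Python to turn microphone audio into a running line of text. It is purely illustrative and is not connected to Signglasses or Google Glass.

```python
"""Illustrative sketch only: live speech-to-text captions, roughly the idea
Enderle describes. Requires the SpeechRecognition and PyAudio packages; the
display is a plain print() instead of a heads-up lens."""

import speech_recognition as sr


def caption_loop() -> None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate once
        while True:
            # Grab a short phrase of audio, then transcribe it.
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                text = recognizer.recognize_google(audio)  # free web API
                print(text)  # a headset would render this in the lens
            except sr.UnknownValueError:
                pass  # nothing intelligible in this chunk of audio
            except sr.RequestError as err:
                print(f"speech service unavailable: {err}")
                break


if __name__ == "__main__":
    caption_loop()
```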