Lee Rannals for redOrbit.com – Your Universe Online
Researchers writing in the journal Nature have uncovered new insights into speech motor control.
University of California, San Francisco researchers wrote about how they gained a better understanding of the neurological basis of speech motor control, including the brain regions that control the lips, jaw, tongue and larynx.
This new research helps shed light on an ability that is unique to humans yet poorly understood. The work could eventually help scientists create brain-computer interfaces for artificial speech communication.
“Speaking is so fundamental to who we are as humans – nearly all of us learn to speak,” said senior author Edward Chang, MD, a neurosurgeon at the UCSF Epilepsy Center and a faculty member in the UCSF Center for Integrative Neuroscience. “But it’s probably the most complex motor activity we do.”
During the study, the team recorded electrical activity directly from the brains of three people undergoing brain surgery at the university. They used this information to help determine the spatial organization of the “speech sensorimotor cortex.” The patients read from a list of English syllables, such as bah, dee, and goo.
“Even though we used English, we found the key patterns observed were ones that linguists have observed in languages around the world – perhaps suggesting universal principles for speaking across all cultures,” said Chang.
They applied a new method known as “state-space” analysis to observe the complex spatial and temporal patterns of neural activity in the speech sensorimotor cortex. They found that this area has a hierarchical and cyclical structure that exerts split-second, symphony-like control over the tongue, jaw, larynx and lips.
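The article does not detail the team's actual analysis pipeline, but the general idea behind a state-space view of neural recordings can be sketched: high-dimensional electrode activity is projected into a low-dimensional space where its trajectory over time can be inspected for structure such as cycles. The sketch below is illustrative only; the simulated data, channel count and the use of PCA as the dimensionality-reduction step are assumptions, not the study's method.

```python
# A minimal, hypothetical sketch of state-space analysis of neural data:
# project channel-by-time recordings into a low-dimensional space and
# inspect the resulting trajectory. Not the authors' actual pipeline.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_channels = 64     # assumed number of cortical electrodes
n_timepoints = 200  # assumed samples per syllable utterance

# Simulate activity as a few shared latent rhythms mixed across channels,
# plus noise -- a stand-in for real sensorimotor cortex recordings.
t = np.linspace(0, 1, n_timepoints)
latents = np.stack([np.sin(2 * np.pi * f * t) for f in (2, 3, 5)])
mixing = rng.normal(size=(n_channels, latents.shape[0]))
activity = mixing @ latents + 0.3 * rng.normal(size=(n_channels, n_timepoints))

# Project the (time x channel) data into a 3-D state space.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(activity.T)  # shape: (n_timepoints, 3)

print("Variance explained:", pca.explained_variance_ratio_.round(2))
# Plotting `trajectory` would show closed, repeating loops imposed by the
# latent rhythms -- a toy analogue of the cyclical organization reported.
```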
“These properties may reflect cortical strategies to greatly simplify the complex coordination of articulators in fluent speech,” said Kristofer Bouchard, PhD, a postdoctoral fellow in the Chang lab who was the first author on the paper.
Similarly, MIT researchers recently published a paper in the journal Frontiers in Psychology arguing that human language is a grafting of two communication forms found elsewhere in the animal kingdom: the elaborate songs of birds, and the information-bearing types of expression seen in other animals.
“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy.
Birds sing songs that carry a single meaning, whether about mating, territory or other things, while other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using waggle dances to indicate the location of food sources to their peers. Primates use a range of sounds, including warnings about predators and other messages.
Humans, the researchers argue, combine both systems: we communicate essential information, but with a melodic capacity and an ability to recombine parts of our uttered language.
RedOrbit also recently reported on University of Zurich research showing that the calls of the banded mongoose are structured similarly to human speech. The biologists found that the monosyllabic calls of banded mongooses contain structures that convey different information, and they drew an analogy between the structure of the mongoose’s vocal expressions and the vocabulary of human speech.