A space for sharing ideas related to artistic practices and their therapeutic effects, with a particular focus on music

Friday, November 15, 2013

How Music Can Reach the Silenced Brain

Words Spoken And Sung
[Image courtesy of Concetta M. Tomaino]
Because music has parallels to spoken language, much research on music and the brain has zeroed in on the similarities and differences between them. The similarities could be clues to more successful methods of using musical cueing to stimulate similar language responses in people with brain injuries. One remarkable example of the functional difference between music and language, however, occurs in people who have suffered a left-side stroke, resulting in a type of aphasia in which verbal comprehension still exists but the ability to speak or find the right words is lost. In these cases, the brain lesion is often located in what is called Broca’s area; speech is slow, nonfluent, and hesitant, with great difficulty in articulation. Yet, despite the loss of speech, many people with this type of aphasia can sing complete lyrics to familiar songs. This has usually been attributed to the separation of function between the left and right hemispheres of the brain, speech being dominant on the left and singing on the right.
Because many clinicians assume a complete separation of function between singing and speaking, they give little attention to the potential for using music to aid speech. But there are several cases in which a patient has recovered speech through the systematic use of rhythmic patterning, leading first to recovery of familiar lyrics and words embedded in songs, then to self-initiation of normal, fluent speech. In each case, however, this remarkable change had been attributed not to the music but to spontaneous recovery during the early months after the stroke.
A similarity shared by music and speech is what we call “prosody,” which includes the elements of stress, pitch direction, pitch height, and intonation contour, or inflection. People with nonfluent aphasia can perform a type of prosodic speech that includes the inflection and contour of previously known phrases. This speech differs, however, from propositional speech (which includes verbal expression of new thoughts and ideas) in its rate, discrete pitch, and increased predictability. Aniruddh D. Patel, Ph.D., a scientist at the Neurosciences Institute in California, theorizes that rhythm and song, which are inherently predictable, may create a “supra-linguistic” structure that helps cue what is coming next in an utterance.
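As an illustrative aside that is not part of the original article: the prosodic elements listed above, particularly pitch height and pitch direction, can be measured directly from a recording. The sketch below uses the open-source librosa library to estimate an intonation contour from an audio file; the file name "spoken_phrase.wav" and the pitch bounds are placeholder assumptions, not values from the text.

```python
import numpy as np
import librosa

# Load a mono recording of a spoken or sung phrase (placeholder file name).
y, sr = librosa.load("spoken_phrase.wav", sr=None, mono=True)

# Estimate the fundamental-frequency (pitch) contour with the pYIN tracker.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # ~65 Hz, a low bound for adult voices (assumed)
    fmax=librosa.note_to_hz("C6"),  # ~1047 Hz, a generous upper bound (assumed)
    sr=sr,
)

# Summarize two of the prosodic elements mentioned above:
# pitch height (mean F0) and overall pitch direction.
voiced = f0[~np.isnan(f0)]
if voiced.size:
    print(f"mean pitch height: {voiced.mean():.1f} Hz")
    direction = "rising" if voiced[-1] > voiced[0] else "falling"
    print(f"overall pitch direction: {direction}")
```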
Brain-imaging studies by Dr. Pascal Belin, of the Service Hospitalier Frédéric Joliot in France, and more recently by Dr. Burkhard Maess at the Max Planck Institute of Cognitive Neuroscience, used PET and MEG scans to determine that areas peripheral to the left language regions of the brain are involved in processing the singing of single words. Additional imaging studies suggest that some aspects of music and language are processed in both the right and left sides of the brain. In many patients who are able to carry over speech techniques from music, success seems to come from their increased ability to attend to sounds and to initiate them, perhaps because parallel mechanisms for these functions have been called into play by music and singing.
