Question: A couple of years back, a discussion started about the FOXP2 gene and its effect on speech. Regardless of the nature of whatever constellation of genes allows human speech, is it reasonable to extrapolate that this constellation also allows what we humans call "music"? That is, music and speech are both approximately equally rich devices for conveying information, they probably use the same parts of the body and brain, and they have similar adaptive rationales. Some languages, perhaps all languages, convey meaning with tone sequences. I can imagine writing a computer program that would translate music into (unrecognizable?) speech, and vice versa.
I don’t know if you’ve heard of Chuck Snowdon’s work, he’s in Madison in the Psych department. He and a collaborator who is a cellist and composer put together an interesting study with tamarins.
Tamarins make different vocalizations in different contexts – characteristic of their emotional state – excited versus calm, anxious, etc. Chuck’s collaborator composed “music” that follows the prosody patterns of these tamarin vocalizations. He then played the music on the cello and resampled the frequencies to match the tamarin vocal range – basically raising the notes two and a half octaves.
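As an aside, the "two and a half octaves" shift is a simple frequency calculation: each octave doubles the frequency, so 2.5 octaves multiplies it by 2^2.5, about 5.66. A minimal sketch (the function name and the example cello frequency are my own, not from the study):

```python
def shift_octaves(freq_hz: float, octaves: float) -> float:
    """Return the frequency shifted up by the given number of octaves.

    Each octave doubles the frequency, so shifting by 2.5 octaves
    multiplies it by 2 ** 2.5 (about 5.66).
    """
    return freq_hz * 2 ** octaves

# A cello note at 220 Hz (A3) moved up into a much higher range:
print(shift_octaves(220.0, 2.5))  # ~1244.5 Hz
```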
They found that when they played the music to the tamarins, it elicited the appropriate responses – in other words, they developed a musical analog of tamarin communication. The implication is that human music may elicit emotional responses in similar ways because of its similarities to human vocalizations.
Now the question is whether language is connected to this. Musical compositions often have a hierarchical structure and repeated elements, much like language. It seems plausible to me that the ability to make music may have much in common with language. So maybe a “translator” from one to the other might yield interesting results.
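As a toy illustration of what such a translator might share between the two domains, here is a sketch (the function name and the half-semitone threshold are my own inventions) that reduces any pitch sequence, whether sung notes or speech fundamental-frequency measurements, to a rise/fall contour, one crude representation common to melody and prosody:

```python
def pitch_contour(freqs_hz):
    """Reduce a sequence of pitches (Hz) to a rise/fall/level contour.

    Both speech prosody and melody can be crudely described this way;
    a real translator would also need rhythm, timbre, and the kind of
    hierarchical structure mentioned above.
    """
    contour = []
    for prev, curr in zip(freqs_hz, freqs_hz[1:]):
        if curr > prev * 1.03:        # more than ~half a semitone up
            contour.append("up")
        elif curr < prev / 1.03:      # more than ~half a semitone down
            contour.append("down")
        else:
            contour.append("level")
    return contour

# A simple melody: A4, up to B4, back down to A4
print(pitch_contour([440.0, 493.9, 440.0]))  # ['up', 'down']
```

The threshold ratio 1.03 is roughly half a semitone (2^(1/24) ≈ 1.029), a deliberately coarse tolerance so that small pitch wobbles count as "level".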