Can auditory errors and illusions help us better understand how the brain works? In episode 32, Mike Vitevitch from the University of Kansas talks with us about his research into the cognitive mechanisms underlying the Speech-to-Song auditory illusion. His article “An account of the Speech-to-Song Illusion using Node Structure Theory,” coauthored with Nichol Castro, Joshua Mendoza, and Elizabeth Tampke, was published in the June 8, 2018 issue of the open-access journal PLOS ONE.

Speech-to-Song Illusion - Mike Vitevitch
Mike’s research into speech errors has the potential to inform how artificially intelligent systems such as Apple’s Siri and Amazon’s Alexa make sense of our speech. So, Ryan and I wrapped up our conversation by asking for his thoughts on the prospect of developing speech recognition systems that might rival the abilities of humans.
In addition to his lab-based experiments, Mike collects speech errors that people may say or overhear at his website: http://spedi.ku.edu/. One contributor, for example, reported mishearing “I have my keys in my purse” as “I have my peas in my purse.” Another reported mis-stating “I’ve booked the tickets” as “I’ve ticked the buckets.” Since undergraduate psychology courses typically focus on formal experimental methods of research, we asked Mike what other approaches he uses in teaching research methods to his students.
“Effect size” refers to the extent to which variables investigated in a research study have a meaningful association with one another. In the 1960s, the statistician Jacob Cohen provided general guidelines which — for better or worse — have become the de facto standard for interpreting effects in light of their magnitude. As his study’s six experiments all had medium to large effects, we asked Mike his thoughts on the relevance of effect sizes when investigating relatively new phenomena, such as the Speech-to-Song Illusion.
Among the countless memes that the internet brings us, it was the “Yanni or Laurel” auditory illusion that dominated the early summer of 2018. In it, a recording of the word “laurel” was heard as “Yanni” by 53% of the roughly 500,000 people polled on Twitter, while the remainder of respondents reported hearing “Laurel.” Mike, however, heard “Lonnie.” Given that such auditory illusions aren’t always perceived by listeners, we asked him why this might be the case with the Speech-to-Song Illusion as well.
Mike and his team used intro psych students as subjects in their experiments. In episode 30 of Parsing Science we heard from Yune Lee about how deficiencies in hearing acuity may be linked to declines in people’s cognitive processing abilities, even among young people with normal hearing. So we were curious to hear why Mike and his team chose undergraduates to test Node Structure Theory’s ability to explain the Speech-to-Song Illusion, as well as whether they plan on investigating the phenomenon among older adults.
When we left off, Mike was about to discuss what auditory illusions — such as the Speech-to-Song Illusion — say about how we interpret the world around us.
Experiencing perceptual illusions can be a normal response when our perceptions don’t match what’s actually happening in the environment. Given this, we were interested in hearing what auditory illusions — such as the Speech-to-Song Illusion — say about how we interpret the world around us … as Mike explains after this short break.
The experiment provided evidence in support of the idea that the repeated priming of syllable nodes contributes to a song-like percept in the Speech-to-Song Illusion. So we followed up by asking Mike what’s new in the field of bilingual research.
Since people who don’t know a foreign language can’t form lexical nodes about those words, the team’s fourth experiment attempted to prime only syllable nodes by using words from a foreign language: Spanish. Ryan and I asked Mike how he and his team developed the Spanish words used in the experiment.
Mike and his team ended up conducting six separate — but related — experiments to test whether Node Structure Theory could explain the Speech-to-Song Illusion. We wondered how they went about designing these studies, as well as what it was that each experiment set out to discover.
Node Structure Theory has been used as an account of everything from normal memory and language processing to dysfunctional processing, as well as differences in processing due to aging or certain cognitive deficits. Next, Mike describes how he and his team decided to test the theory’s ability to explain what happens in the Speech-to-Song Illusion.
Node Structure Theory is a model that describes people’s perceptions and actions, and comprises three main processes: priming, activation, and satiation. Doug and I were eager to learn how the model conceptualizes these three processes.
Mike’s research interest is in how information pertaining to words is stored in memory, and how the organization of those words in memory enables us to access that information quickly and accurately. So, we began by asking Mike what got him interested in studying auditory illusions in the first place.

Websites and other resources

    • “Psychology of Language” (left audio channel only):

 

Press coverage

The University of Kansas | PsyPost | Science Daily | The Verge | IBTimes

Bonus Clips

Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.

Support us for as little as $1 per month at Patreon. Cancel anytime.

Patrons can access bonus content here.


We’re not a registered tax-exempt organization, so unfortunately gifts aren’t tax deductible.

Hosts / Producers

Ryan Watkins & Doug Leigh

How to Cite

Watkins, R., Leigh, D., & Vitevitch, M. S. (2018, September 19). Parsing Science – Speech-to-Song Illusion. figshare. https://doi.org/10.6084/m9.figshare.7109084

Music

What’s The Angle? by Shane Ivers

Transcript

Mike Vitevitch: It’s these same mechanisms that explain speech perception, speech production, that explains other kinds of speech errors as well.

Doug Leigh: This is Parsing Science, the unpublished stories behind the world’s most compelling science, as told by the researchers themselves. I’m Doug Leigh. Ryan’s out with a cold this week, but he’ll be back next time.

Leigh: In 2003, the British-American psychologist Diana Deutsch released the spoken-word CD Phantom Words and Other Curiosities, which contained this sentence:

(Diana Deutsch’s voice): The sounds as they appear to you are not only different from those that are really present, but they sometimes behave so strangely as to seem quite impossible.

Leigh: While editing this recording in 1995, Deutsch accidentally left the phrase “sometimes behave so strangely” looping on a computer, like this:

sometimes behave so strangely
sometimes behave so strangely
sometimes behave so strangely
sometimes behave so strangely

Leigh: Instead of the phrase appearing to be spoken, Deutsch heard a melody in the sing-song cadence of her voice, and dubbed this effect the Speech-to-Song Illusion. Today, we’re joined by Mike Vitevitch from the University of Kansas. He’ll talk with us about his research into the cognitive mechanisms underlying the illusion. Here’s Mike Vitevitch.

Vitevitch: Hi! I’m Mike Vitevitch. I’m the chair and professor of psychology at the University of Kansas, and I’m a language researcher, a speech researcher. Most of the people that had looked at this before were music cognition people. So, I was bringing just a very different perspective, very different lens, very different set of tools, very different set of theories to bear on this illusion. So, I think that was kind of a neat perspective that we were able to bring as language researchers. There’s been a lot of work looking at these connections between music and language. It seems like a lot of it comes from the music side. So, why aren’t we, as language people, doing a bit more to look at this, and then looking at that as a possible way to see how the system works? One of the things that really made this kind of interesting for me is that I really don’t know a lot about music, but one of the other authors, Josh Mendoza, was a composition major. He was in my intro psych class, and when I did this as a demo, his eyes were like the size of saucers, his jaw was on the desk, like, “This is so cool! How do you do that? Can I work on this?” So I was like, “Yeah, sure, all right, cool!” So, it was nice to have somebody who really knew a lot about music, you know, working on this.
