Psycholinguistics and neurolinguistics episodes
Where in the brain is language located, and how do we know? In this episode, we talk about neurolinguistics, the two main areas in the brain that are in charge of language, and two different neuroimaging techniques we use to look at where and when the brain does all its linguistic magic.
What steps does our brain go through when we encounter language, and how can we measure what those steps are? In this episode, we talk about event-related brain potentials (ERPs): the small electrical changes that we can see when the brain responds to stimuli. We also go over some of the basic steps in processing language that ERPs can show us, for sounds, meaning, and syntax.
What happens when we lose our ability to use language? What difficulties do we run into when studying language loss? In this episode, we look at aphasia, and particularly Broca's aphasia: what symptoms occur, why it's hard to make sweeping generalizations about what to expect, and what aphasia can tell us about how language works.
How do we deal with gender when we process language? Do we take it into consideration when we hear words and sentences? In this week's episode, we talk about gender and language processing: the different kinds of gender in language, how gender influences our ability to retrieve words from our mental dictionaries, and how our views on gender temporarily keep us from considering otherwise legitimate interpretations of sentences.
Why do we sometimes hear things that aren't there? When does the way we process language leave cracks for illusions to appear? In this episode, we talk about phonological illusions: what varieties there are, the processing strategies that lead to them, and how sometimes, we even try interpreting sounds as language when they aren't linguistic at all.
What connections do we make when we encounter language? How long does it take us to spot these ties? In this episode, we talk about priming: what it is, how far those connections go, and why our minds aren't exploding with the complexity of the web we drag through our sentences all the time.
How do we build sentences based on what we see and hear? What approaches do we take to work out what's being said? In this episode, we talk about parsing strategies: why we need them, what they do for us, and how they can sometimes lead us to make weird interpretations.
What do our eyes do while we read written language? What can their movements tell us about our processing? In this episode, we look at eye tracking: how we can measure these small movements, what the paths our eyes take while reading reveal about processing, and how even just studying how we look at pictures can unlock how our brains approach incoming words.
How much meaning is there just in sounds? How much are words alike across languages? In this episode, we talk about the arbitrariness of the sign: how our sounds don't have to connect to the meanings they do, how far cases like onomatopoeia go in countering the otherwise random matching of sounds to meanings, and whether individual sounds or syllables carry their own semantic punch.
Why do so many words and sentences have multiple meanings? How do we deal with all of the overlaps? In this episode, we talk about ambiguity: where it comes from, how we deal with processing it, and how children pick meanings from the menu of semantic possibilities they're presented with.